00:00:00.000 Started by upstream project "autotest-per-patch" build number 132406
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.043 using credential 00000000-0000-0000-0000-000000000002
00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.074 Fetching changes from the remote Git repository
00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.127 Using shallow fetch with depth 1
00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.127 > git --version # timeout=10
00:00:00.185 > git --version # 'git version 2.39.2'
00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.550 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.561 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.572 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.572 > git config core.sparsecheckout # timeout=10
00:00:04.584 > git read-tree -mu HEAD # timeout=10
00:00:04.605 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.626 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.627 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.715 [Pipeline] Start of Pipeline
00:00:04.729 [Pipeline] library
00:00:04.731 Loading library shm_lib@master
00:00:04.732 Library shm_lib@master is cached. Copying from home.
00:00:04.751 [Pipeline] node
00:00:04.762 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.764 [Pipeline] {
00:00:04.776 [Pipeline] catchError
00:00:04.778 [Pipeline] {
00:00:04.792 [Pipeline] wrap
00:00:04.802 [Pipeline] {
00:00:04.811 [Pipeline] stage
00:00:04.813 [Pipeline] { (Prologue)
00:00:05.023 [Pipeline] sh
00:00:05.302 + logger -p user.info -t JENKINS-CI
00:00:05.321 [Pipeline] echo
00:00:05.323 Node: WFP8
00:00:05.330 [Pipeline] sh
00:00:05.625 [Pipeline] setCustomBuildProperty
00:00:05.637 [Pipeline] echo
00:00:05.638 Cleanup processes
00:00:05.642 [Pipeline] sh
00:00:05.921 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.921 2457046 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.933 [Pipeline] sh
00:00:06.212 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.212 ++ grep -v 'sudo pgrep'
00:00:06.212 ++ awk '{print $1}'
00:00:06.212 + sudo kill -9
00:00:06.212 + true
00:00:06.223 [Pipeline] cleanWs
00:00:06.231 [WS-CLEANUP] Deleting project workspace...
00:00:06.231 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.237 [WS-CLEANUP] done
00:00:06.243 [Pipeline] setCustomBuildProperty
00:00:06.259 [Pipeline] sh
00:00:06.538 + sudo git config --global --replace-all safe.directory '*'
00:00:06.621 [Pipeline] httpRequest
00:00:06.945 [Pipeline] echo
00:00:06.948 Sorcerer 10.211.164.20 is alive
00:00:06.956 [Pipeline] retry
00:00:06.958 [Pipeline] {
00:00:06.971 [Pipeline] httpRequest
00:00:06.974 HttpMethod: GET
00:00:06.975 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.975 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.978 Response Code: HTTP/1.1 200 OK
00:00:06.978 Success: Status code 200 is in the accepted range: 200,404
00:00:06.979 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.654 [Pipeline] }
00:00:07.668 [Pipeline] // retry
00:00:07.673 [Pipeline] sh
00:00:07.952 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.966 [Pipeline] httpRequest
00:00:08.556 [Pipeline] echo
00:00:08.558 Sorcerer 10.211.164.20 is alive
00:00:08.566 [Pipeline] retry
00:00:08.568 [Pipeline] {
00:00:08.580 [Pipeline] httpRequest
00:00:08.584 HttpMethod: GET
00:00:08.584 URL: http://10.211.164.20/packages/spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:00:08.585 Sending request to url: http://10.211.164.20/packages/spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:00:08.607 Response Code: HTTP/1.1 200 OK
00:00:08.608 Success: Status code 200 is in the accepted range: 200,404
00:00:08.608 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:01:01.432 [Pipeline] }
00:01:01.450 [Pipeline] // retry
00:01:01.459 [Pipeline] sh
00:01:01.742 + tar --no-same-owner -xf spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:01:04.287 [Pipeline] sh
00:01:04.572 + git -C spdk log --oneline -n5
00:01:04.572 c1691a126 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:01:04.572 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:01:04.572 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:01:04.572 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:01:04.572 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:01:04.583 [Pipeline] }
00:01:04.596 [Pipeline] // stage
00:01:04.603 [Pipeline] stage
00:01:04.606 [Pipeline] { (Prepare)
00:01:04.621 [Pipeline] writeFile
00:01:04.636 [Pipeline] sh
00:01:04.920 + logger -p user.info -t JENKINS-CI
00:01:04.932 [Pipeline] sh
00:01:05.216 + logger -p user.info -t JENKINS-CI
00:01:05.228 [Pipeline] sh
00:01:05.513 + cat autorun-spdk.conf
00:01:05.513 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.513 SPDK_TEST_NVMF=1
00:01:05.513 SPDK_TEST_NVME_CLI=1
00:01:05.513 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.513 SPDK_TEST_NVMF_NICS=e810
00:01:05.513 SPDK_TEST_VFIOUSER=1
00:01:05.513 SPDK_RUN_UBSAN=1
00:01:05.513 NET_TYPE=phy
00:01:05.520 RUN_NIGHTLY=0
00:01:05.524 [Pipeline] readFile
00:01:05.547 [Pipeline] withEnv
00:01:05.549 [Pipeline] {
00:01:05.559 [Pipeline] sh
00:01:05.843 + set -ex
00:01:05.843 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:05.843 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.843 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.843 ++ SPDK_TEST_NVMF=1
00:01:05.843 ++ SPDK_TEST_NVME_CLI=1
00:01:05.843 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.843 ++ SPDK_TEST_NVMF_NICS=e810
00:01:05.843 ++ SPDK_TEST_VFIOUSER=1
00:01:05.843 ++ SPDK_RUN_UBSAN=1
00:01:05.843 ++ NET_TYPE=phy
00:01:05.843 ++ RUN_NIGHTLY=0
00:01:05.843 + case $SPDK_TEST_NVMF_NICS in
00:01:05.843 + DRIVERS=ice
00:01:05.843 + [[ tcp == \r\d\m\a ]]
00:01:05.843 + [[ -n ice ]]
00:01:05.843 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:05.843 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:05.843 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:05.843 rmmod: ERROR: Module irdma is not currently loaded
00:01:05.843 rmmod: ERROR: Module i40iw is not currently loaded
00:01:05.843 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:05.843 + true
00:01:05.843 + for D in $DRIVERS
00:01:05.843 + sudo modprobe ice
00:01:05.843 + exit 0
00:01:05.852 [Pipeline] }
00:01:05.866 [Pipeline] // withEnv
00:01:05.870 [Pipeline] }
00:01:05.884 [Pipeline] // stage
00:01:05.892 [Pipeline] catchError
00:01:05.894 [Pipeline] {
00:01:05.905 [Pipeline] timeout
00:01:05.905 Timeout set to expire in 1 hr 0 min
00:01:05.907 [Pipeline] {
00:01:05.917 [Pipeline] stage
00:01:05.919 [Pipeline] { (Tests)
00:01:05.931 [Pipeline] sh
00:01:06.216 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:06.216 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:06.216 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:06.216 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:06.216 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:06.216 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:06.216 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:06.216 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:06.216 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:06.216 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:06.216 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:06.216 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:06.216 + source /etc/os-release
00:01:06.216 ++ NAME='Fedora Linux'
00:01:06.216 ++ VERSION='39 (Cloud Edition)'
00:01:06.216 ++ ID=fedora
00:01:06.216 ++ VERSION_ID=39
00:01:06.216 ++ VERSION_CODENAME=
00:01:06.216 ++ PLATFORM_ID=platform:f39
00:01:06.216 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:06.216 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:06.216 ++ LOGO=fedora-logo-icon
00:01:06.216 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:06.216 ++ HOME_URL=https://fedoraproject.org/
00:01:06.216 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:06.216 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:06.216 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:06.216 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:06.216 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:06.216 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:06.216 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:06.216 ++ SUPPORT_END=2024-11-12
00:01:06.216 ++ VARIANT='Cloud Edition'
00:01:06.216 ++ VARIANT_ID=cloud
00:01:06.216 + uname -a
00:01:06.216 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:06.216 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:08.753 Hugepages
00:01:08.753 node hugesize free / total
00:01:08.753 node0 1048576kB 0 / 0
00:01:08.753 node0 2048kB 0 / 0
00:01:08.753 node1 1048576kB 0 / 0
00:01:08.753 node1 2048kB 0 / 0
00:01:08.753
00:01:08.753 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:08.753 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:08.753 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:08.753 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:08.753 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:08.753 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:08.753 + rm -f /tmp/spdk-ld-path
00:01:08.753 + source autorun-spdk.conf
00:01:08.753 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.753 ++ SPDK_TEST_NVMF=1
00:01:08.753 ++ SPDK_TEST_NVME_CLI=1
00:01:08.753 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.753 ++ SPDK_TEST_NVMF_NICS=e810
00:01:08.753 ++ SPDK_TEST_VFIOUSER=1
00:01:08.753 ++ SPDK_RUN_UBSAN=1
00:01:08.753 ++ NET_TYPE=phy
00:01:08.753 ++ RUN_NIGHTLY=0
00:01:08.753 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:08.753 + [[ -n '' ]]
00:01:08.753 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:09.013 + for M in /var/spdk/build-*-manifest.txt
00:01:09.013 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:09.013 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:09.013 + for M in /var/spdk/build-*-manifest.txt
00:01:09.013 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:09.013 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:09.013 + for M in /var/spdk/build-*-manifest.txt
00:01:09.013 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:09.013 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:09.013 ++ uname
00:01:09.013 + [[ Linux == \L\i\n\u\x ]]
00:01:09.013 + sudo dmesg -T
00:01:09.013 + sudo dmesg --clear
00:01:09.013 + dmesg_pid=2458439
00:01:09.013 + [[ Fedora Linux == FreeBSD ]]
00:01:09.013 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.013 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.013 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:09.013 + [[ -x /usr/src/fio-static/fio ]]
00:01:09.013 + export FIO_BIN=/usr/src/fio-static/fio
00:01:09.013 + FIO_BIN=/usr/src/fio-static/fio
00:01:09.013 + sudo dmesg -Tw
00:01:09.013 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:09.013 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:09.013 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:09.013 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.013 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.013 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:09.013 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.013 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.013 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.013 15:54:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:09.013 15:54:09 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:09.013 15:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:09.013 15:54:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:09.013 15:54:09 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.013 15:54:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:09.013 15:54:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:09.013 15:54:09 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:09.013 15:54:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:09.013 15:54:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:09.013 15:54:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:09.013 15:54:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.013 15:54:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.013 15:54:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.013 15:54:09 -- paths/export.sh@5 -- $ export PATH
00:01:09.013 15:54:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.013 15:54:09 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:09.013 15:54:09 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:09.013 15:54:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732114449.XXXXXX
00:01:09.013 15:54:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732114449.RetjUl
00:01:09.013 15:54:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:09.013 15:54:09 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:09.013 15:54:09 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:09.013 15:54:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:09.013 15:54:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:09.013 15:54:09 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:09.013 15:54:09 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:09.013 15:54:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.271 15:54:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:09.271 15:54:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:09.271 15:54:09 -- pm/common@17 -- $ local monitor
00:01:09.271 15:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.271 15:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.271 15:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.271 15:54:09 -- pm/common@21 -- $ date +%s
00:01:09.271 15:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.271 15:54:09 -- pm/common@21 -- $ date +%s
00:01:09.271 15:54:09 -- pm/common@25 -- $ sleep 1
00:01:09.271 15:54:09 -- pm/common@21 -- $ date +%s
00:01:09.271 15:54:09 -- pm/common@21 -- $ date +%s
00:01:09.271 15:54:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114449
00:01:09.271 15:54:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114449
00:01:09.271 15:54:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114449
00:01:09.271 15:54:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114449
00:01:09.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114449_collect-cpu-load.pm.log
00:01:09.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114449_collect-vmstat.pm.log
00:01:09.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114449_collect-cpu-temp.pm.log
00:01:09.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114449_collect-bmc-pm.bmc.pm.log
00:01:10.207 15:54:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:10.207 15:54:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:10.207 15:54:10 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:10.207 15:54:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:10.207 15:54:10 -- spdk/autobuild.sh@16 -- $ date -u
00:01:10.207 Wed Nov 20 02:54:10 PM UTC 2024
00:01:10.207 15:54:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:10.207 v25.01-pre-226-gc1691a126
00:01:10.207 15:54:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:10.207 15:54:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:10.207 15:54:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:10.207 15:54:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:10.207 15:54:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:10.207 15:54:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:10.207 ************************************
00:01:10.207 START TEST ubsan
00:01:10.207 ************************************
00:01:10.207 15:54:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:10.207 using ubsan
00:01:10.207
00:01:10.207 real 0m0.000s
00:01:10.207 user 0m0.000s
00:01:10.207 sys 0m0.000s
00:01:10.207 15:54:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:10.207 15:54:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:10.207 ************************************
00:01:10.207 END TEST ubsan
00:01:10.207 ************************************
00:01:10.207 15:54:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:10.207 15:54:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:10.207 15:54:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:10.207 15:54:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:10.207 15:54:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:10.207 15:54:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:10.207 15:54:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:10.207 15:54:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:10.207 15:54:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:10.466 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:10.466 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:10.724 Using 'verbs' RDMA provider
00:01:23.871 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:36.140 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:36.140 Creating mk/config.mk...done.
00:01:36.140 Creating mk/cc.flags.mk...done.
00:01:36.140 Type 'make' to build.
00:01:36.140 15:54:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:36.140 15:54:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:36.140 15:54:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:36.140 15:54:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.140 ************************************
00:01:36.140 START TEST make
00:01:36.140 ************************************
00:01:36.140 15:54:36 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:36.399 make[1]: Nothing to be done for 'all'.
00:01:37.785 The Meson build system
00:01:37.785 Version: 1.5.0
00:01:37.785 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:37.785 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:37.785 Build type: native build
00:01:37.785 Project name: libvfio-user
00:01:37.785 Project version: 0.0.1
00:01:37.785 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:37.785 C linker for the host machine: cc ld.bfd 2.40-14
00:01:37.785 Host machine cpu family: x86_64
00:01:37.785 Host machine cpu: x86_64
00:01:37.785 Run-time dependency threads found: YES
00:01:37.785 Library dl found: YES
00:01:37.785 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:37.785 Run-time dependency json-c found: YES 0.17
00:01:37.785 Run-time dependency cmocka found: YES 1.1.7
00:01:37.785 Program pytest-3 found: NO
00:01:37.785 Program flake8 found: NO
00:01:37.785 Program misspell-fixer found: NO
00:01:37.785 Program restructuredtext-lint found: NO
00:01:37.785 Program valgrind found: YES (/usr/bin/valgrind)
00:01:37.785 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:37.785 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:37.785 Compiler for C supports arguments -Wwrite-strings: YES
00:01:37.785 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:37.785 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:37.785 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:37.785 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:37.785 Build targets in project: 8
00:01:37.785 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:37.785 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:37.785
00:01:37.785 libvfio-user 0.0.1
00:01:37.785
00:01:37.785 User defined options
00:01:37.785 buildtype : debug
00:01:37.785 default_library: shared
00:01:37.785 libdir : /usr/local/lib
00:01:37.785
00:01:37.785 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:38.349 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:38.349 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:38.349 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:38.349 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:38.349 [4/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:38.349 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:38.349 [6/37] Compiling C object samples/null.p/null.c.o
00:01:38.349 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:38.349 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:38.349 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:38.349 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:38.349 [11/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:38.349 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:38.349 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:38.349 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:38.349 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:38.349 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:38.349 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:38.349 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:38.349 [19/37] Compiling C object samples/server.p/server.c.o
00:01:38.349 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:38.349 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:38.349 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:38.349 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:38.349 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:38.349 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:38.349 [26/37] Compiling C object samples/client.p/client.c.o
00:01:38.349 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:38.349 [28/37] Linking target samples/client
00:01:38.349 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:38.349 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:38.607 [31/37] Linking target test/unit_tests
00:01:38.607 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:38.607 [33/37] Linking target samples/null
00:01:38.607 [34/37] Linking target samples/server
00:01:38.607 [35/37] Linking target samples/gpio-pci-idio-16
00:01:38.607 [36/37] Linking target samples/lspci
00:01:38.607 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:38.607 INFO: autodetecting backend as ninja
00:01:38.607 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:39.175 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:39.175 ninja: no work to do.
00:01:44.450 The Meson build system
00:01:44.450 Version: 1.5.0
00:01:44.450 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:44.450 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:44.450 Build type: native build
00:01:44.450 Program cat found: YES (/usr/bin/cat)
00:01:44.450 Project name: DPDK
00:01:44.450 Project version: 24.03.0
00:01:44.450 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:44.450 C linker for the host machine: cc ld.bfd 2.40-14
00:01:44.450 Host machine cpu family: x86_64
00:01:44.450 Host machine cpu: x86_64
00:01:44.450 Message: ## Building in Developer Mode ##
00:01:44.450 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:44.450 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:44.450 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:44.450 Program python3 found: YES (/usr/bin/python3)
00:01:44.450 Program cat found: YES (/usr/bin/cat)
00:01:44.450 Compiler for C supports arguments -march=native: YES
00:01:44.450 Checking for size of "void *" : 8
00:01:44.450 Checking for size of "void *" : 8 (cached)
00:01:44.450 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:44.450 Library m found: YES
00:01:44.450 Library numa found: YES
00:01:44.450 Has header "numaif.h" : YES
00:01:44.450 Library fdt found: NO
00:01:44.450 Library execinfo found: NO
00:01:44.450 Has header "execinfo.h" : YES
00:01:44.450 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:44.450 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:44.450 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:44.450 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:44.450 Run-time dependency openssl found: YES 3.1.1
00:01:44.450 Run-time dependency libpcap found: YES 1.10.4
00:01:44.450 Has header "pcap.h" with dependency libpcap: YES
00:01:44.450 Compiler for C supports arguments -Wcast-qual: YES
00:01:44.450 Compiler for C supports arguments -Wdeprecated: YES
00:01:44.450 Compiler for C supports arguments -Wformat: YES
00:01:44.450 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:44.450 Compiler for C supports arguments -Wformat-security: NO
00:01:44.450 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:44.450 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:44.450 Compiler for C supports arguments -Wnested-externs: YES
00:01:44.450 Compiler for C supports arguments -Wold-style-definition: YES
00:01:44.450 Compiler for C supports arguments -Wpointer-arith: YES
00:01:44.450 Compiler for C supports arguments -Wsign-compare: YES
00:01:44.450 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:44.450 Compiler for C supports arguments -Wundef: YES
00:01:44.450 Compiler for C supports arguments -Wwrite-strings: YES
00:01:44.450 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:44.450 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:44.450 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:44.450 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:44.450 Program objdump found: YES (/usr/bin/objdump)
00:01:44.450 Compiler for C supports arguments -mavx512f: YES
00:01:44.450 Checking if "AVX512 checking" compiles: YES
00:01:44.450 Fetching value of define "__SSE4_2__" : 1
00:01:44.450 Fetching value of define "__AES__" : 1
00:01:44.450 Fetching value of define "__AVX__" : 1
00:01:44.450 Fetching value of define "__AVX2__" : 1
00:01:44.450 Fetching value of define "__AVX512BW__" : 1
00:01:44.450 Fetching value of define "__AVX512CD__" : 1
00:01:44.450 Fetching value of define "__AVX512DQ__" : 1
00:01:44.450 Fetching value of define "__AVX512F__" : 1
00:01:44.450 Fetching value of define "__AVX512VL__" : 1 00:01:44.450 Fetching value of define "__PCLMUL__" : 1 00:01:44.450 Fetching value of define "__RDRND__" : 1 00:01:44.450 Fetching value of define "__RDSEED__" : 1 00:01:44.450 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:44.450 Fetching value of define "__znver1__" : (undefined) 00:01:44.450 Fetching value of define "__znver2__" : (undefined) 00:01:44.450 Fetching value of define "__znver3__" : (undefined) 00:01:44.450 Fetching value of define "__znver4__" : (undefined) 00:01:44.450 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.450 Message: lib/log: Defining dependency "log" 00:01:44.450 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.450 Message: lib/telemetry: Defining dependency "telemetry" 00:01:44.450 Checking for function "getentropy" : NO 00:01:44.450 Message: lib/eal: Defining dependency "eal" 00:01:44.450 Message: lib/ring: Defining dependency "ring" 00:01:44.450 Message: lib/rcu: Defining dependency "rcu" 00:01:44.450 Message: lib/mempool: Defining dependency "mempool" 00:01:44.450 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.451 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.451 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.451 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.451 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.451 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.451 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:44.451 Compiler for C supports arguments -mpclmul: YES 00:01:44.451 Compiler for C supports arguments -maes: YES 00:01:44.451 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.451 Compiler for C supports arguments -mavx512bw: YES 00:01:44.451 Compiler for C supports arguments -mavx512dq: YES 00:01:44.451 Compiler for C supports arguments -mavx512vl: YES 00:01:44.451 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:44.451 Compiler for C supports arguments -mavx2: YES 00:01:44.451 Compiler for C supports arguments -mavx: YES 00:01:44.451 Message: lib/net: Defining dependency "net" 00:01:44.451 Message: lib/meter: Defining dependency "meter" 00:01:44.451 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.451 Message: lib/pci: Defining dependency "pci" 00:01:44.451 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.451 Message: lib/hash: Defining dependency "hash" 00:01:44.451 Message: lib/timer: Defining dependency "timer" 00:01:44.451 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.451 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.451 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.451 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.451 Message: lib/power: Defining dependency "power" 00:01:44.451 Message: lib/reorder: Defining dependency "reorder" 00:01:44.451 Message: lib/security: Defining dependency "security" 00:01:44.451 Has header "linux/userfaultfd.h" : YES 00:01:44.451 Has header "linux/vduse.h" : YES 00:01:44.451 Message: lib/vhost: Defining dependency "vhost" 00:01:44.451 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.451 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.451 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.451 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:44.451 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:44.451 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:44.451 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:44.451 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:44.451 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:44.451 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:44.451 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:44.451 Configuring doxy-api-html.conf using configuration 00:01:44.451 Configuring doxy-api-man.conf using configuration 00:01:44.451 Program mandb found: YES (/usr/bin/mandb) 00:01:44.451 Program sphinx-build found: NO 00:01:44.451 Configuring rte_build_config.h using configuration 00:01:44.451 Message: 00:01:44.451 ================= 00:01:44.451 Applications Enabled 00:01:44.451 ================= 00:01:44.451 00:01:44.451 apps: 00:01:44.451 00:01:44.451 00:01:44.451 Message: 00:01:44.451 ================= 00:01:44.451 Libraries Enabled 00:01:44.451 ================= 00:01:44.451 00:01:44.451 libs: 00:01:44.451 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:44.451 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:44.451 cryptodev, dmadev, power, reorder, security, vhost, 00:01:44.451 00:01:44.451 Message: 00:01:44.451 =============== 00:01:44.451 Drivers Enabled 00:01:44.451 =============== 00:01:44.451 00:01:44.451 common: 00:01:44.451 00:01:44.451 bus: 00:01:44.451 pci, vdev, 00:01:44.451 mempool: 00:01:44.451 ring, 00:01:44.451 dma: 00:01:44.451 00:01:44.451 net: 00:01:44.451 00:01:44.451 crypto: 00:01:44.451 00:01:44.451 compress: 00:01:44.451 00:01:44.451 vdpa: 00:01:44.451 00:01:44.451 00:01:44.451 Message: 00:01:44.451 ================= 00:01:44.451 Content Skipped 00:01:44.451 ================= 00:01:44.451 00:01:44.451 apps: 00:01:44.451 dumpcap: explicitly disabled via build config 00:01:44.451 graph: explicitly disabled via build config 00:01:44.451 pdump: explicitly disabled via build config 00:01:44.451 proc-info: explicitly disabled via build config 00:01:44.451 test-acl: explicitly disabled via build config 00:01:44.451 test-bbdev: explicitly disabled via build config 00:01:44.451 test-cmdline: explicitly disabled via build config 00:01:44.451 test-compress-perf: explicitly disabled via build config 00:01:44.451 test-crypto-perf: explicitly disabled 
via build config 00:01:44.451 test-dma-perf: explicitly disabled via build config 00:01:44.451 test-eventdev: explicitly disabled via build config 00:01:44.451 test-fib: explicitly disabled via build config 00:01:44.451 test-flow-perf: explicitly disabled via build config 00:01:44.451 test-gpudev: explicitly disabled via build config 00:01:44.451 test-mldev: explicitly disabled via build config 00:01:44.451 test-pipeline: explicitly disabled via build config 00:01:44.451 test-pmd: explicitly disabled via build config 00:01:44.451 test-regex: explicitly disabled via build config 00:01:44.451 test-sad: explicitly disabled via build config 00:01:44.451 test-security-perf: explicitly disabled via build config 00:01:44.451 00:01:44.451 libs: 00:01:44.451 argparse: explicitly disabled via build config 00:01:44.451 metrics: explicitly disabled via build config 00:01:44.451 acl: explicitly disabled via build config 00:01:44.451 bbdev: explicitly disabled via build config 00:01:44.451 bitratestats: explicitly disabled via build config 00:01:44.451 bpf: explicitly disabled via build config 00:01:44.451 cfgfile: explicitly disabled via build config 00:01:44.451 distributor: explicitly disabled via build config 00:01:44.451 efd: explicitly disabled via build config 00:01:44.451 eventdev: explicitly disabled via build config 00:01:44.451 dispatcher: explicitly disabled via build config 00:01:44.451 gpudev: explicitly disabled via build config 00:01:44.451 gro: explicitly disabled via build config 00:01:44.451 gso: explicitly disabled via build config 00:01:44.451 ip_frag: explicitly disabled via build config 00:01:44.451 jobstats: explicitly disabled via build config 00:01:44.451 latencystats: explicitly disabled via build config 00:01:44.451 lpm: explicitly disabled via build config 00:01:44.451 member: explicitly disabled via build config 00:01:44.451 pcapng: explicitly disabled via build config 00:01:44.451 rawdev: explicitly disabled via build config 00:01:44.451 regexdev: 
explicitly disabled via build config 00:01:44.451 mldev: explicitly disabled via build config 00:01:44.451 rib: explicitly disabled via build config 00:01:44.451 sched: explicitly disabled via build config 00:01:44.451 stack: explicitly disabled via build config 00:01:44.451 ipsec: explicitly disabled via build config 00:01:44.451 pdcp: explicitly disabled via build config 00:01:44.451 fib: explicitly disabled via build config 00:01:44.451 port: explicitly disabled via build config 00:01:44.451 pdump: explicitly disabled via build config 00:01:44.451 table: explicitly disabled via build config 00:01:44.451 pipeline: explicitly disabled via build config 00:01:44.451 graph: explicitly disabled via build config 00:01:44.451 node: explicitly disabled via build config 00:01:44.451 00:01:44.451 drivers: 00:01:44.451 common/cpt: not in enabled drivers build config 00:01:44.451 common/dpaax: not in enabled drivers build config 00:01:44.451 common/iavf: not in enabled drivers build config 00:01:44.451 common/idpf: not in enabled drivers build config 00:01:44.451 common/ionic: not in enabled drivers build config 00:01:44.451 common/mvep: not in enabled drivers build config 00:01:44.452 common/octeontx: not in enabled drivers build config 00:01:44.452 bus/auxiliary: not in enabled drivers build config 00:01:44.452 bus/cdx: not in enabled drivers build config 00:01:44.452 bus/dpaa: not in enabled drivers build config 00:01:44.452 bus/fslmc: not in enabled drivers build config 00:01:44.452 bus/ifpga: not in enabled drivers build config 00:01:44.452 bus/platform: not in enabled drivers build config 00:01:44.452 bus/uacce: not in enabled drivers build config 00:01:44.452 bus/vmbus: not in enabled drivers build config 00:01:44.452 common/cnxk: not in enabled drivers build config 00:01:44.452 common/mlx5: not in enabled drivers build config 00:01:44.452 common/nfp: not in enabled drivers build config 00:01:44.452 common/nitrox: not in enabled drivers build config 00:01:44.452 
common/qat: not in enabled drivers build config 00:01:44.452 common/sfc_efx: not in enabled drivers build config 00:01:44.452 mempool/bucket: not in enabled drivers build config 00:01:44.452 mempool/cnxk: not in enabled drivers build config 00:01:44.452 mempool/dpaa: not in enabled drivers build config 00:01:44.452 mempool/dpaa2: not in enabled drivers build config 00:01:44.452 mempool/octeontx: not in enabled drivers build config 00:01:44.452 mempool/stack: not in enabled drivers build config 00:01:44.452 dma/cnxk: not in enabled drivers build config 00:01:44.452 dma/dpaa: not in enabled drivers build config 00:01:44.452 dma/dpaa2: not in enabled drivers build config 00:01:44.452 dma/hisilicon: not in enabled drivers build config 00:01:44.452 dma/idxd: not in enabled drivers build config 00:01:44.452 dma/ioat: not in enabled drivers build config 00:01:44.452 dma/skeleton: not in enabled drivers build config 00:01:44.452 net/af_packet: not in enabled drivers build config 00:01:44.452 net/af_xdp: not in enabled drivers build config 00:01:44.452 net/ark: not in enabled drivers build config 00:01:44.452 net/atlantic: not in enabled drivers build config 00:01:44.452 net/avp: not in enabled drivers build config 00:01:44.452 net/axgbe: not in enabled drivers build config 00:01:44.452 net/bnx2x: not in enabled drivers build config 00:01:44.452 net/bnxt: not in enabled drivers build config 00:01:44.452 net/bonding: not in enabled drivers build config 00:01:44.452 net/cnxk: not in enabled drivers build config 00:01:44.452 net/cpfl: not in enabled drivers build config 00:01:44.452 net/cxgbe: not in enabled drivers build config 00:01:44.452 net/dpaa: not in enabled drivers build config 00:01:44.452 net/dpaa2: not in enabled drivers build config 00:01:44.452 net/e1000: not in enabled drivers build config 00:01:44.452 net/ena: not in enabled drivers build config 00:01:44.452 net/enetc: not in enabled drivers build config 00:01:44.452 net/enetfec: not in enabled drivers build 
config 00:01:44.452 net/enic: not in enabled drivers build config 00:01:44.452 net/failsafe: not in enabled drivers build config 00:01:44.452 net/fm10k: not in enabled drivers build config 00:01:44.452 net/gve: not in enabled drivers build config 00:01:44.452 net/hinic: not in enabled drivers build config 00:01:44.452 net/hns3: not in enabled drivers build config 00:01:44.452 net/i40e: not in enabled drivers build config 00:01:44.452 net/iavf: not in enabled drivers build config 00:01:44.452 net/ice: not in enabled drivers build config 00:01:44.452 net/idpf: not in enabled drivers build config 00:01:44.452 net/igc: not in enabled drivers build config 00:01:44.452 net/ionic: not in enabled drivers build config 00:01:44.452 net/ipn3ke: not in enabled drivers build config 00:01:44.452 net/ixgbe: not in enabled drivers build config 00:01:44.452 net/mana: not in enabled drivers build config 00:01:44.452 net/memif: not in enabled drivers build config 00:01:44.452 net/mlx4: not in enabled drivers build config 00:01:44.452 net/mlx5: not in enabled drivers build config 00:01:44.452 net/mvneta: not in enabled drivers build config 00:01:44.452 net/mvpp2: not in enabled drivers build config 00:01:44.452 net/netvsc: not in enabled drivers build config 00:01:44.452 net/nfb: not in enabled drivers build config 00:01:44.452 net/nfp: not in enabled drivers build config 00:01:44.452 net/ngbe: not in enabled drivers build config 00:01:44.452 net/null: not in enabled drivers build config 00:01:44.452 net/octeontx: not in enabled drivers build config 00:01:44.452 net/octeon_ep: not in enabled drivers build config 00:01:44.452 net/pcap: not in enabled drivers build config 00:01:44.452 net/pfe: not in enabled drivers build config 00:01:44.452 net/qede: not in enabled drivers build config 00:01:44.452 net/ring: not in enabled drivers build config 00:01:44.452 net/sfc: not in enabled drivers build config 00:01:44.452 net/softnic: not in enabled drivers build config 00:01:44.452 net/tap: 
not in enabled drivers build config 00:01:44.452 net/thunderx: not in enabled drivers build config 00:01:44.452 net/txgbe: not in enabled drivers build config 00:01:44.452 net/vdev_netvsc: not in enabled drivers build config 00:01:44.452 net/vhost: not in enabled drivers build config 00:01:44.452 net/virtio: not in enabled drivers build config 00:01:44.452 net/vmxnet3: not in enabled drivers build config 00:01:44.452 raw/*: missing internal dependency, "rawdev" 00:01:44.452 crypto/armv8: not in enabled drivers build config 00:01:44.452 crypto/bcmfs: not in enabled drivers build config 00:01:44.452 crypto/caam_jr: not in enabled drivers build config 00:01:44.452 crypto/ccp: not in enabled drivers build config 00:01:44.452 crypto/cnxk: not in enabled drivers build config 00:01:44.452 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.452 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.452 crypto/ipsec_mb: not in enabled drivers build config 00:01:44.452 crypto/mlx5: not in enabled drivers build config 00:01:44.452 crypto/mvsam: not in enabled drivers build config 00:01:44.452 crypto/nitrox: not in enabled drivers build config 00:01:44.452 crypto/null: not in enabled drivers build config 00:01:44.452 crypto/octeontx: not in enabled drivers build config 00:01:44.452 crypto/openssl: not in enabled drivers build config 00:01:44.452 crypto/scheduler: not in enabled drivers build config 00:01:44.452 crypto/uadk: not in enabled drivers build config 00:01:44.452 crypto/virtio: not in enabled drivers build config 00:01:44.452 compress/isal: not in enabled drivers build config 00:01:44.452 compress/mlx5: not in enabled drivers build config 00:01:44.452 compress/nitrox: not in enabled drivers build config 00:01:44.452 compress/octeontx: not in enabled drivers build config 00:01:44.452 compress/zlib: not in enabled drivers build config 00:01:44.452 regex/*: missing internal dependency, "regexdev" 00:01:44.452 ml/*: missing internal dependency, "mldev" 
00:01:44.452 vdpa/ifc: not in enabled drivers build config 00:01:44.452 vdpa/mlx5: not in enabled drivers build config 00:01:44.452 vdpa/nfp: not in enabled drivers build config 00:01:44.452 vdpa/sfc: not in enabled drivers build config 00:01:44.452 event/*: missing internal dependency, "eventdev" 00:01:44.452 baseband/*: missing internal dependency, "bbdev" 00:01:44.452 gpu/*: missing internal dependency, "gpudev" 00:01:44.452 00:01:44.452 00:01:44.452 Build targets in project: 85 00:01:44.452 00:01:44.452 DPDK 24.03.0 00:01:44.452 00:01:44.452 User defined options 00:01:44.452 buildtype : debug 00:01:44.452 default_library : shared 00:01:44.452 libdir : lib 00:01:44.452 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:44.452 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:44.452 c_link_args : 00:01:44.452 cpu_instruction_set: native 00:01:44.453 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:44.453 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:44.453 enable_docs : false 00:01:44.453 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:44.453 enable_kmods : false 00:01:44.453 max_lcores : 128 00:01:44.453 tests : false 00:01:44.453 00:01:44.453 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.028 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:45.028 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.028 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.028 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.028 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:45.028 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.028 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.028 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.028 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:45.028 [9/268] Linking static target lib/librte_kvargs.a 00:01:45.028 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.028 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.028 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.028 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.028 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:45.028 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.028 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.028 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.028 [18/268] Linking static target lib/librte_log.a 00:01:45.028 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.288 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:45.288 [21/268] Linking static target lib/librte_pci.a 00:01:45.288 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:45.288 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:45.288 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:45.288 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:45.547 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:45.547 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:45.547 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:45.547 [29/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:45.547 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.547 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:45.547 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.547 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.547 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.547 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.547 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.547 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:45.547 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.547 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:45.547 [40/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.547 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.547 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.547 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:45.547 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.547 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.547 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.547 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:45.547 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:45.547 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.547 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.547 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:45.547 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:45.547 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:45.547 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.547 [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.547 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.547 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.547 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.547 [59/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.547 [60/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.547 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:45.547 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:45.547 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:45.547 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.547 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:45.547 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:45.547 [67/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:45.547 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.547 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:45.547 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:45.547 [71/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:45.547 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.547 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.547 [74/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.547 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.547 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.547 [77/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.547 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.547 [79/268] Linking static target lib/librte_meter.a 00:01:45.547 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.547 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:45.547 [82/268] Linking static target lib/librte_ring.a 00:01:45.547 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.547 [84/268] Linking static target lib/librte_telemetry.a 00:01:45.547 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.547 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.547 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.547 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.547 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.807 [90/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.807 [91/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.807 [92/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:45.807 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.807 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.807 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:45.807 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.807 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.807 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:45.808 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.808 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:45.808 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:45.808 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.808 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.808 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.808 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.808 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.808 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:45.808 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.808 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.808 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.808 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.808 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.808 [113/268] Linking static target lib/librte_net.a 00:01:45.808 [114/268] Linking static target lib/librte_mempool.a 00:01:45.808 [115/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:45.808 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.808 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.808 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.808 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.808 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:45.808 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.808 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.808 [123/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.808 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.808 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.808 [126/268] Linking static target lib/librte_rcu.a 00:01:45.808 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.808 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.808 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.808 [130/268] Linking static target lib/librte_eal.a 00:01:45.808 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.808 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.808 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.808 [134/268] Linking static target lib/librte_cmdline.a 00:01:45.808 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.808 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.808 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:46.066 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.066 [139/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.066 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.066 [141/268] Linking static target lib/librte_mbuf.a 00:01:46.066 [142/268] Linking static target lib/librte_timer.a 00:01:46.066 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.066 [144/268] Linking target lib/librte_log.so.24.1 00:01:46.066 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:46.066 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.066 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.066 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.066 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.066 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.066 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.066 [152/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.066 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.066 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:46.066 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.066 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.066 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.066 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:46.066 [159/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.066 [160/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.066 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.066 [162/268] Linking static target lib/librte_dmadev.a 00:01:46.066 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.066 [164/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.066 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.066 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.066 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:46.066 [168/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:46.066 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.066 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.066 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.066 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.066 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.066 [174/268] Linking target lib/librte_kvargs.so.24.1 00:01:46.067 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.067 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.067 [177/268] Linking target lib/librte_telemetry.so.24.1 00:01:46.325 [178/268] Linking static target lib/librte_compressdev.a 00:01:46.325 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.325 [180/268] Linking static target lib/librte_power.a 00:01:46.325 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.325 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.326 
[183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.326 [184/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.326 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.326 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.326 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.326 [188/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.326 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.326 [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.326 [191/268] Linking static target drivers/librte_bus_vdev.a 00:01:46.326 [192/268] Linking static target lib/librte_reorder.a 00:01:46.326 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.326 [194/268] Linking static target lib/librte_security.a 00:01:46.326 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.326 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:46.326 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.326 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.326 [199/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.326 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.326 [201/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.326 [202/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.583 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.583 [204/268] Linking static target lib/librte_hash.a 00:01:46.583 [205/268] Generating lib/mempool.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:46.583 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.583 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.583 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:46.583 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.583 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.583 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.583 [212/268] Linking static target drivers/librte_mempool_ring.a 00:01:46.583 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.583 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.583 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.583 [216/268] Linking static target lib/librte_ethdev.a 00:01:46.583 [217/268] Linking static target lib/librte_cryptodev.a 00:01:46.583 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.842 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.842 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.842 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.842 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.101 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:47.101 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.101 [225/268] Generating lib/power.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:47.398 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.398 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.966 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.966 [229/268] Linking static target lib/librte_vhost.a 00:01:48.532 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.907 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.180 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.751 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.012 [234/268] Linking target lib/librte_eal.so.24.1 00:01:56.012 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:56.012 [236/268] Linking target lib/librte_ring.so.24.1 00:01:56.012 [237/268] Linking target lib/librte_meter.so.24.1 00:01:56.012 [238/268] Linking target lib/librte_timer.so.24.1 00:01:56.012 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:56.012 [240/268] Linking target lib/librte_pci.so.24.1 00:01:56.012 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:56.270 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:56.270 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:56.270 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:56.271 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:56.271 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:56.271 [247/268] Linking target drivers/librte_bus_pci.so.24.1 
00:01:56.271 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:56.271 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:56.530 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:56.530 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:56.530 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:56.530 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:56.530 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:56.788 [255/268] Linking target lib/librte_net.so.24.1 00:01:56.788 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:56.788 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:56.788 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:56.788 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:56.788 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:56.788 [261/268] Linking target lib/librte_hash.so.24.1 00:01:56.788 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:56.788 [263/268] Linking target lib/librte_security.so.24.1 00:01:56.788 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:57.047 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.047 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.047 [267/268] Linking target lib/librte_power.so.24.1 00:01:57.047 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:57.047 INFO: autodetecting backend as ninja 00:01:57.047 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:07.026 CC lib/ut_mock/mock.o 00:02:07.026 CC lib/log/log.o 00:02:07.026 CC lib/ut/ut.o 00:02:07.026 CC lib/log/log_flags.o 00:02:07.026 CC lib/log/log_deprecated.o 
00:02:07.285 LIB libspdk_ut_mock.a 00:02:07.285 LIB libspdk_ut.a 00:02:07.285 LIB libspdk_log.a 00:02:07.285 SO libspdk_ut_mock.so.6.0 00:02:07.285 SO libspdk_ut.so.2.0 00:02:07.285 SO libspdk_log.so.7.1 00:02:07.285 SYMLINK libspdk_ut_mock.so 00:02:07.285 SYMLINK libspdk_ut.so 00:02:07.285 SYMLINK libspdk_log.so 00:02:07.854 CC lib/dma/dma.o 00:02:07.854 CC lib/util/base64.o 00:02:07.854 CC lib/util/bit_array.o 00:02:07.854 CC lib/util/cpuset.o 00:02:07.854 CC lib/util/crc32.o 00:02:07.854 CC lib/util/crc16.o 00:02:07.854 CC lib/util/crc32c.o 00:02:07.854 CC lib/util/crc32_ieee.o 00:02:07.854 CC lib/ioat/ioat.o 00:02:07.854 CC lib/util/crc64.o 00:02:07.854 CC lib/util/dif.o 00:02:07.854 CC lib/util/fd.o 00:02:07.854 CXX lib/trace_parser/trace.o 00:02:07.854 CC lib/util/fd_group.o 00:02:07.854 CC lib/util/file.o 00:02:07.854 CC lib/util/hexlify.o 00:02:07.854 CC lib/util/iov.o 00:02:07.854 CC lib/util/math.o 00:02:07.854 CC lib/util/net.o 00:02:07.854 CC lib/util/pipe.o 00:02:07.854 CC lib/util/strerror_tls.o 00:02:07.854 CC lib/util/string.o 00:02:07.854 CC lib/util/uuid.o 00:02:07.854 CC lib/util/xor.o 00:02:07.854 CC lib/util/zipf.o 00:02:07.854 CC lib/util/md5.o 00:02:07.854 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.854 CC lib/vfio_user/host/vfio_user.o 00:02:07.854 LIB libspdk_dma.a 00:02:07.854 SO libspdk_dma.so.5.0 00:02:08.114 LIB libspdk_ioat.a 00:02:08.114 SYMLINK libspdk_dma.so 00:02:08.114 SO libspdk_ioat.so.7.0 00:02:08.114 SYMLINK libspdk_ioat.so 00:02:08.114 LIB libspdk_vfio_user.a 00:02:08.114 SO libspdk_vfio_user.so.5.0 00:02:08.114 LIB libspdk_util.a 00:02:08.114 SYMLINK libspdk_vfio_user.so 00:02:08.372 SO libspdk_util.so.10.1 00:02:08.373 SYMLINK libspdk_util.so 00:02:08.373 LIB libspdk_trace_parser.a 00:02:08.373 SO libspdk_trace_parser.so.6.0 00:02:08.630 SYMLINK libspdk_trace_parser.so 00:02:08.630 CC lib/conf/conf.o 00:02:08.630 CC lib/idxd/idxd.o 00:02:08.630 CC lib/json/json_parse.o 00:02:08.630 CC lib/idxd/idxd_user.o 00:02:08.630 CC 
lib/json/json_util.o 00:02:08.630 CC lib/idxd/idxd_kernel.o 00:02:08.630 CC lib/env_dpdk/env.o 00:02:08.630 CC lib/json/json_write.o 00:02:08.630 CC lib/env_dpdk/memory.o 00:02:08.630 CC lib/rdma_utils/rdma_utils.o 00:02:08.630 CC lib/env_dpdk/pci.o 00:02:08.630 CC lib/env_dpdk/init.o 00:02:08.630 CC lib/vmd/vmd.o 00:02:08.630 CC lib/env_dpdk/threads.o 00:02:08.630 CC lib/vmd/led.o 00:02:08.630 CC lib/env_dpdk/pci_ioat.o 00:02:08.630 CC lib/env_dpdk/pci_virtio.o 00:02:08.630 CC lib/env_dpdk/pci_vmd.o 00:02:08.630 CC lib/env_dpdk/pci_idxd.o 00:02:08.630 CC lib/env_dpdk/pci_event.o 00:02:08.630 CC lib/env_dpdk/sigbus_handler.o 00:02:08.630 CC lib/env_dpdk/pci_dpdk.o 00:02:08.630 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.630 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.889 LIB libspdk_conf.a 00:02:08.889 SO libspdk_conf.so.6.0 00:02:08.889 LIB libspdk_rdma_utils.a 00:02:08.889 LIB libspdk_json.a 00:02:08.889 SYMLINK libspdk_conf.so 00:02:09.148 SO libspdk_rdma_utils.so.1.0 00:02:09.148 SO libspdk_json.so.6.0 00:02:09.148 SYMLINK libspdk_rdma_utils.so 00:02:09.148 SYMLINK libspdk_json.so 00:02:09.148 LIB libspdk_idxd.a 00:02:09.148 SO libspdk_idxd.so.12.1 00:02:09.148 LIB libspdk_vmd.a 00:02:09.148 SO libspdk_vmd.so.6.0 00:02:09.406 SYMLINK libspdk_idxd.so 00:02:09.406 SYMLINK libspdk_vmd.so 00:02:09.406 CC lib/rdma_provider/common.o 00:02:09.406 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:09.406 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.406 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.406 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.406 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.666 LIB libspdk_rdma_provider.a 00:02:09.666 LIB libspdk_jsonrpc.a 00:02:09.666 SO libspdk_rdma_provider.so.7.0 00:02:09.666 SO libspdk_jsonrpc.so.6.0 00:02:09.666 SYMLINK libspdk_rdma_provider.so 00:02:09.666 SYMLINK libspdk_jsonrpc.so 00:02:09.666 LIB libspdk_env_dpdk.a 00:02:09.924 SO libspdk_env_dpdk.so.15.1 00:02:09.924 SYMLINK libspdk_env_dpdk.so 00:02:09.924 CC lib/rpc/rpc.o 00:02:10.183 
LIB libspdk_rpc.a 00:02:10.183 SO libspdk_rpc.so.6.0 00:02:10.183 SYMLINK libspdk_rpc.so 00:02:10.749 CC lib/notify/notify.o 00:02:10.749 CC lib/trace/trace.o 00:02:10.749 CC lib/notify/notify_rpc.o 00:02:10.749 CC lib/trace/trace_flags.o 00:02:10.749 CC lib/trace/trace_rpc.o 00:02:10.749 CC lib/keyring/keyring.o 00:02:10.749 CC lib/keyring/keyring_rpc.o 00:02:10.749 LIB libspdk_notify.a 00:02:10.749 SO libspdk_notify.so.6.0 00:02:10.749 LIB libspdk_keyring.a 00:02:10.749 LIB libspdk_trace.a 00:02:10.749 SO libspdk_keyring.so.2.0 00:02:10.749 SYMLINK libspdk_notify.so 00:02:11.007 SO libspdk_trace.so.11.0 00:02:11.007 SYMLINK libspdk_keyring.so 00:02:11.007 SYMLINK libspdk_trace.so 00:02:11.266 CC lib/thread/thread.o 00:02:11.266 CC lib/thread/iobuf.o 00:02:11.266 CC lib/sock/sock.o 00:02:11.266 CC lib/sock/sock_rpc.o 00:02:11.524 LIB libspdk_sock.a 00:02:11.524 SO libspdk_sock.so.10.0 00:02:11.782 SYMLINK libspdk_sock.so 00:02:12.041 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:12.041 CC lib/nvme/nvme_ctrlr.o 00:02:12.041 CC lib/nvme/nvme_fabric.o 00:02:12.041 CC lib/nvme/nvme_ns_cmd.o 00:02:12.041 CC lib/nvme/nvme_ns.o 00:02:12.041 CC lib/nvme/nvme_pcie_common.o 00:02:12.041 CC lib/nvme/nvme_pcie.o 00:02:12.041 CC lib/nvme/nvme_qpair.o 00:02:12.041 CC lib/nvme/nvme.o 00:02:12.041 CC lib/nvme/nvme_quirks.o 00:02:12.041 CC lib/nvme/nvme_transport.o 00:02:12.041 CC lib/nvme/nvme_discovery.o 00:02:12.041 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.041 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.041 CC lib/nvme/nvme_tcp.o 00:02:12.041 CC lib/nvme/nvme_opal.o 00:02:12.041 CC lib/nvme/nvme_io_msg.o 00:02:12.041 CC lib/nvme/nvme_poll_group.o 00:02:12.041 CC lib/nvme/nvme_zns.o 00:02:12.041 CC lib/nvme/nvme_stubs.o 00:02:12.041 CC lib/nvme/nvme_auth.o 00:02:12.041 CC lib/nvme/nvme_cuse.o 00:02:12.041 CC lib/nvme/nvme_vfio_user.o 00:02:12.041 CC lib/nvme/nvme_rdma.o 00:02:12.299 LIB libspdk_thread.a 00:02:12.299 SO libspdk_thread.so.11.0 00:02:12.557 SYMLINK libspdk_thread.so 
00:02:12.816 CC lib/accel/accel.o 00:02:12.816 CC lib/accel/accel_rpc.o 00:02:12.816 CC lib/accel/accel_sw.o 00:02:12.816 CC lib/fsdev/fsdev.o 00:02:12.816 CC lib/fsdev/fsdev_rpc.o 00:02:12.816 CC lib/fsdev/fsdev_io.o 00:02:12.816 CC lib/blob/blobstore.o 00:02:12.816 CC lib/blob/request.o 00:02:12.816 CC lib/vfu_tgt/tgt_endpoint.o 00:02:12.816 CC lib/blob/zeroes.o 00:02:12.816 CC lib/vfu_tgt/tgt_rpc.o 00:02:12.816 CC lib/blob/blob_bs_dev.o 00:02:12.816 CC lib/init/json_config.o 00:02:12.816 CC lib/init/subsystem.o 00:02:12.816 CC lib/init/subsystem_rpc.o 00:02:12.816 CC lib/init/rpc.o 00:02:12.816 CC lib/virtio/virtio.o 00:02:12.816 CC lib/virtio/virtio_vhost_user.o 00:02:12.816 CC lib/virtio/virtio_vfio_user.o 00:02:12.816 CC lib/virtio/virtio_pci.o 00:02:13.075 LIB libspdk_init.a 00:02:13.075 SO libspdk_init.so.6.0 00:02:13.075 LIB libspdk_vfu_tgt.a 00:02:13.075 LIB libspdk_virtio.a 00:02:13.075 SO libspdk_vfu_tgt.so.3.0 00:02:13.075 SYMLINK libspdk_init.so 00:02:13.075 SO libspdk_virtio.so.7.0 00:02:13.075 SYMLINK libspdk_vfu_tgt.so 00:02:13.075 SYMLINK libspdk_virtio.so 00:02:13.332 LIB libspdk_fsdev.a 00:02:13.332 SO libspdk_fsdev.so.2.0 00:02:13.332 CC lib/event/app.o 00:02:13.332 CC lib/event/reactor.o 00:02:13.332 CC lib/event/log_rpc.o 00:02:13.332 SYMLINK libspdk_fsdev.so 00:02:13.332 CC lib/event/app_rpc.o 00:02:13.332 CC lib/event/scheduler_static.o 00:02:13.591 LIB libspdk_accel.a 00:02:13.591 SO libspdk_accel.so.16.0 00:02:13.591 LIB libspdk_nvme.a 00:02:13.591 SYMLINK libspdk_accel.so 00:02:13.591 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:13.591 LIB libspdk_event.a 00:02:13.851 SO libspdk_nvme.so.15.0 00:02:13.851 SO libspdk_event.so.14.0 00:02:13.851 SYMLINK libspdk_event.so 00:02:13.851 SYMLINK libspdk_nvme.so 00:02:13.851 CC lib/bdev/bdev.o 00:02:13.851 CC lib/bdev/bdev_rpc.o 00:02:13.851 CC lib/bdev/bdev_zone.o 00:02:13.851 CC lib/bdev/part.o 00:02:13.851 CC lib/bdev/scsi_nvme.o 00:02:14.109 LIB libspdk_fuse_dispatcher.a 00:02:14.109 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:14.369 SYMLINK libspdk_fuse_dispatcher.so 00:02:14.936 LIB libspdk_blob.a 00:02:14.936 SO libspdk_blob.so.11.0 00:02:14.936 SYMLINK libspdk_blob.so 00:02:15.502 CC lib/lvol/lvol.o 00:02:15.502 CC lib/blobfs/blobfs.o 00:02:15.502 CC lib/blobfs/tree.o 00:02:15.760 LIB libspdk_bdev.a 00:02:15.760 SO libspdk_bdev.so.17.0 00:02:16.019 LIB libspdk_blobfs.a 00:02:16.019 SYMLINK libspdk_bdev.so 00:02:16.019 SO libspdk_blobfs.so.10.0 00:02:16.019 LIB libspdk_lvol.a 00:02:16.019 SO libspdk_lvol.so.10.0 00:02:16.019 SYMLINK libspdk_blobfs.so 00:02:16.019 SYMLINK libspdk_lvol.so 00:02:16.278 CC lib/ublk/ublk.o 00:02:16.278 CC lib/ublk/ublk_rpc.o 00:02:16.278 CC lib/scsi/dev.o 00:02:16.278 CC lib/nvmf/ctrlr.o 00:02:16.278 CC lib/scsi/lun.o 00:02:16.278 CC lib/nbd/nbd.o 00:02:16.278 CC lib/nvmf/ctrlr_discovery.o 00:02:16.278 CC lib/nbd/nbd_rpc.o 00:02:16.278 CC lib/scsi/port.o 00:02:16.278 CC lib/nvmf/ctrlr_bdev.o 00:02:16.278 CC lib/scsi/scsi.o 00:02:16.278 CC lib/nvmf/subsystem.o 00:02:16.278 CC lib/scsi/scsi_bdev.o 00:02:16.278 CC lib/ftl/ftl_core.o 00:02:16.278 CC lib/nvmf/nvmf.o 00:02:16.278 CC lib/ftl/ftl_init.o 00:02:16.278 CC lib/nvmf/nvmf_rpc.o 00:02:16.278 CC lib/scsi/scsi_pr.o 00:02:16.278 CC lib/scsi/scsi_rpc.o 00:02:16.278 CC lib/ftl/ftl_layout.o 00:02:16.278 CC lib/nvmf/transport.o 00:02:16.278 CC lib/ftl/ftl_debug.o 00:02:16.278 CC lib/scsi/task.o 00:02:16.278 CC lib/nvmf/tcp.o 00:02:16.278 CC lib/nvmf/stubs.o 00:02:16.278 CC lib/ftl/ftl_io.o 00:02:16.278 CC lib/ftl/ftl_sb.o 00:02:16.278 CC lib/nvmf/mdns_server.o 00:02:16.278 CC lib/ftl/ftl_l2p.o 00:02:16.278 CC lib/nvmf/vfio_user.o 00:02:16.278 CC lib/nvmf/rdma.o 00:02:16.278 CC lib/nvmf/auth.o 00:02:16.278 CC lib/ftl/ftl_l2p_flat.o 00:02:16.278 CC lib/ftl/ftl_nv_cache.o 00:02:16.278 CC lib/ftl/ftl_band.o 00:02:16.278 CC lib/ftl/ftl_band_ops.o 00:02:16.278 CC lib/ftl/ftl_writer.o 00:02:16.278 CC lib/ftl/ftl_rq.o 00:02:16.278 CC lib/ftl/ftl_reloc.o 00:02:16.278 CC 
lib/ftl/ftl_l2p_cache.o 00:02:16.278 CC lib/ftl/ftl_p2l.o 00:02:16.278 CC lib/ftl/ftl_p2l_log.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.278 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.278 CC lib/ftl/utils/ftl_md.o 00:02:16.278 CC lib/ftl/utils/ftl_conf.o 00:02:16.278 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.278 CC lib/ftl/utils/ftl_mempool.o 00:02:16.278 CC lib/ftl/utils/ftl_property.o 00:02:16.278 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.278 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.278 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.278 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.278 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.278 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.278 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.278 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:16.278 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.278 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.278 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.278 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:16.278 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.278 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:16.278 CC lib/ftl/base/ftl_base_dev.o 00:02:16.278 CC lib/ftl/ftl_trace.o 00:02:16.844 LIB libspdk_nbd.a 00:02:16.844 LIB libspdk_scsi.a 00:02:16.844 SO libspdk_nbd.so.7.0 00:02:16.844 LIB libspdk_ublk.a 00:02:16.844 SO libspdk_scsi.so.9.0 00:02:16.844 SO libspdk_ublk.so.3.0 00:02:17.103 SYMLINK libspdk_nbd.so 00:02:17.103 SYMLINK libspdk_scsi.so 00:02:17.103 SYMLINK libspdk_ublk.so 00:02:17.103 LIB 
libspdk_ftl.a 00:02:17.361 CC lib/iscsi/conn.o 00:02:17.361 CC lib/iscsi/init_grp.o 00:02:17.361 CC lib/vhost/vhost.o 00:02:17.361 CC lib/iscsi/iscsi.o 00:02:17.361 CC lib/iscsi/param.o 00:02:17.361 CC lib/vhost/vhost_rpc.o 00:02:17.361 CC lib/vhost/vhost_scsi.o 00:02:17.361 CC lib/iscsi/portal_grp.o 00:02:17.361 CC lib/vhost/vhost_blk.o 00:02:17.361 CC lib/vhost/rte_vhost_user.o 00:02:17.361 CC lib/iscsi/tgt_node.o 00:02:17.361 CC lib/iscsi/iscsi_subsystem.o 00:02:17.361 CC lib/iscsi/iscsi_rpc.o 00:02:17.361 CC lib/iscsi/task.o 00:02:17.361 SO libspdk_ftl.so.9.0 00:02:17.621 SYMLINK libspdk_ftl.so 00:02:18.188 LIB libspdk_nvmf.a 00:02:18.188 SO libspdk_nvmf.so.20.0 00:02:18.188 LIB libspdk_vhost.a 00:02:18.188 SO libspdk_vhost.so.8.0 00:02:18.188 SYMLINK libspdk_vhost.so 00:02:18.188 SYMLINK libspdk_nvmf.so 00:02:18.446 LIB libspdk_iscsi.a 00:02:18.446 SO libspdk_iscsi.so.8.0 00:02:18.446 SYMLINK libspdk_iscsi.so 00:02:19.015 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.015 CC module/vfu_device/vfu_virtio.o 00:02:19.015 CC module/vfu_device/vfu_virtio_blk.o 00:02:19.015 CC module/vfu_device/vfu_virtio_scsi.o 00:02:19.015 CC module/vfu_device/vfu_virtio_fs.o 00:02:19.015 CC module/vfu_device/vfu_virtio_rpc.o 00:02:19.273 CC module/accel/ioat/accel_ioat.o 00:02:19.273 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.273 CC module/fsdev/aio/fsdev_aio.o 00:02:19.273 LIB libspdk_env_dpdk_rpc.a 00:02:19.273 CC module/sock/posix/posix.o 00:02:19.273 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:19.273 CC module/blob/bdev/blob_bdev.o 00:02:19.273 CC module/fsdev/aio/linux_aio_mgr.o 00:02:19.273 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.273 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.273 CC module/accel/error/accel_error.o 00:02:19.273 CC module/accel/error/accel_error_rpc.o 00:02:19.273 CC module/keyring/file/keyring_rpc.o 00:02:19.273 CC module/keyring/linux/keyring.o 00:02:19.273 CC module/keyring/file/keyring.o 00:02:19.273 CC 
module/keyring/linux/keyring_rpc.o 00:02:19.273 CC module/accel/dsa/accel_dsa.o 00:02:19.273 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.273 CC module/accel/iaa/accel_iaa.o 00:02:19.273 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.273 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.273 SO libspdk_env_dpdk_rpc.so.6.0 00:02:19.273 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.273 LIB libspdk_accel_ioat.a 00:02:19.273 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.273 LIB libspdk_scheduler_gscheduler.a 00:02:19.273 LIB libspdk_keyring_file.a 00:02:19.273 SO libspdk_accel_ioat.so.6.0 00:02:19.273 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:19.273 LIB libspdk_keyring_linux.a 00:02:19.273 LIB libspdk_scheduler_dynamic.a 00:02:19.273 SO libspdk_scheduler_gscheduler.so.4.0 00:02:19.273 SO libspdk_keyring_file.so.2.0 00:02:19.273 LIB libspdk_accel_error.a 00:02:19.273 LIB libspdk_accel_iaa.a 00:02:19.532 SO libspdk_scheduler_dynamic.so.4.0 00:02:19.532 SO libspdk_accel_error.so.2.0 00:02:19.532 SO libspdk_keyring_linux.so.1.0 00:02:19.532 SO libspdk_accel_iaa.so.3.0 00:02:19.532 SYMLINK libspdk_accel_ioat.so 00:02:19.532 LIB libspdk_blob_bdev.a 00:02:19.532 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.532 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.532 SYMLINK libspdk_keyring_file.so 00:02:19.532 LIB libspdk_accel_dsa.a 00:02:19.532 SO libspdk_blob_bdev.so.11.0 00:02:19.532 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.533 SYMLINK libspdk_accel_error.so 00:02:19.533 SO libspdk_accel_dsa.so.5.0 00:02:19.533 SYMLINK libspdk_keyring_linux.so 00:02:19.533 SYMLINK libspdk_accel_iaa.so 00:02:19.533 SYMLINK libspdk_accel_dsa.so 00:02:19.533 SYMLINK libspdk_blob_bdev.so 00:02:19.533 LIB libspdk_vfu_device.a 00:02:19.533 SO libspdk_vfu_device.so.3.0 00:02:19.533 SYMLINK libspdk_vfu_device.so 00:02:19.792 LIB libspdk_fsdev_aio.a 00:02:19.792 SO libspdk_fsdev_aio.so.1.0 00:02:19.792 LIB libspdk_sock_posix.a 00:02:19.792 SO libspdk_sock_posix.so.6.0 00:02:19.792 
SYMLINK libspdk_fsdev_aio.so 00:02:19.792 SYMLINK libspdk_sock_posix.so 00:02:20.050 CC module/bdev/split/vbdev_split.o 00:02:20.050 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.050 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.050 CC module/bdev/malloc/bdev_malloc.o 00:02:20.050 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.050 CC module/bdev/gpt/gpt.o 00:02:20.050 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.050 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.050 CC module/bdev/null/bdev_null.o 00:02:20.050 CC module/bdev/delay/vbdev_delay.o 00:02:20.050 CC module/bdev/null/bdev_null_rpc.o 00:02:20.050 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.050 CC module/bdev/error/vbdev_error.o 00:02:20.050 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.050 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.050 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.050 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.050 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.050 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.050 CC module/bdev/ftl/bdev_ftl.o 00:02:20.050 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.050 CC module/bdev/nvme/bdev_nvme.o 00:02:20.050 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.050 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.050 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.050 CC module/bdev/nvme/nvme_rpc.o 00:02:20.050 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.050 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.050 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.050 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.050 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.050 CC module/bdev/nvme/vbdev_opal.o 00:02:20.050 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.050 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.050 CC module/bdev/raid/bdev_raid.o 00:02:20.050 CC module/bdev/aio/bdev_aio.o 00:02:20.050 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.050 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.050 CC module/bdev/raid/bdev_raid_sb.o 
00:02:20.050 CC module/bdev/raid/raid1.o 00:02:20.050 CC module/bdev/raid/raid0.o 00:02:20.050 CC module/bdev/raid/concat.o 00:02:20.309 LIB libspdk_bdev_split.a 00:02:20.309 LIB libspdk_blobfs_bdev.a 00:02:20.309 SO libspdk_bdev_split.so.6.0 00:02:20.309 SO libspdk_blobfs_bdev.so.6.0 00:02:20.309 LIB libspdk_bdev_gpt.a 00:02:20.309 LIB libspdk_bdev_error.a 00:02:20.309 SYMLINK libspdk_bdev_split.so 00:02:20.309 SYMLINK libspdk_blobfs_bdev.so 00:02:20.309 SO libspdk_bdev_gpt.so.6.0 00:02:20.309 LIB libspdk_bdev_passthru.a 00:02:20.309 SO libspdk_bdev_error.so.6.0 00:02:20.309 LIB libspdk_bdev_null.a 00:02:20.309 LIB libspdk_bdev_ftl.a 00:02:20.309 SO libspdk_bdev_passthru.so.6.0 00:02:20.309 SO libspdk_bdev_ftl.so.6.0 00:02:20.309 SO libspdk_bdev_null.so.6.0 00:02:20.309 LIB libspdk_bdev_zone_block.a 00:02:20.309 SYMLINK libspdk_bdev_gpt.so 00:02:20.309 LIB libspdk_bdev_malloc.a 00:02:20.309 LIB libspdk_bdev_delay.a 00:02:20.309 LIB libspdk_bdev_aio.a 00:02:20.309 LIB libspdk_bdev_iscsi.a 00:02:20.309 SO libspdk_bdev_zone_block.so.6.0 00:02:20.309 SYMLINK libspdk_bdev_error.so 00:02:20.309 SO libspdk_bdev_iscsi.so.6.0 00:02:20.309 SO libspdk_bdev_malloc.so.6.0 00:02:20.309 SO libspdk_bdev_delay.so.6.0 00:02:20.309 SYMLINK libspdk_bdev_null.so 00:02:20.309 SYMLINK libspdk_bdev_passthru.so 00:02:20.309 SYMLINK libspdk_bdev_ftl.so 00:02:20.309 SO libspdk_bdev_aio.so.6.0 00:02:20.568 SYMLINK libspdk_bdev_zone_block.so 00:02:20.568 SYMLINK libspdk_bdev_iscsi.so 00:02:20.568 SYMLINK libspdk_bdev_malloc.so 00:02:20.568 SYMLINK libspdk_bdev_delay.so 00:02:20.568 LIB libspdk_bdev_lvol.a 00:02:20.568 LIB libspdk_bdev_virtio.a 00:02:20.568 SYMLINK libspdk_bdev_aio.so 00:02:20.568 SO libspdk_bdev_lvol.so.6.0 00:02:20.568 SO libspdk_bdev_virtio.so.6.0 00:02:20.568 SYMLINK libspdk_bdev_lvol.so 00:02:20.568 SYMLINK libspdk_bdev_virtio.so 00:02:20.828 LIB libspdk_bdev_raid.a 00:02:20.828 SO libspdk_bdev_raid.so.6.0 00:02:21.088 SYMLINK libspdk_bdev_raid.so 00:02:22.026 LIB 
libspdk_bdev_nvme.a 00:02:22.026 SO libspdk_bdev_nvme.so.7.1 00:02:22.026 SYMLINK libspdk_bdev_nvme.so 00:02:22.594 CC module/event/subsystems/iobuf/iobuf.o 00:02:22.594 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:22.594 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:22.594 CC module/event/subsystems/vmd/vmd.o 00:02:22.594 CC module/event/subsystems/sock/sock.o 00:02:22.594 CC module/event/subsystems/keyring/keyring.o 00:02:22.594 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:22.594 CC module/event/subsystems/scheduler/scheduler.o 00:02:22.594 CC module/event/subsystems/fsdev/fsdev.o 00:02:22.594 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.854 LIB libspdk_event_keyring.a 00:02:22.854 LIB libspdk_event_vfu_tgt.a 00:02:22.854 LIB libspdk_event_sock.a 00:02:22.854 LIB libspdk_event_vmd.a 00:02:22.854 LIB libspdk_event_fsdev.a 00:02:22.854 LIB libspdk_event_scheduler.a 00:02:22.854 LIB libspdk_event_iobuf.a 00:02:22.854 LIB libspdk_event_vhost_blk.a 00:02:22.854 SO libspdk_event_iobuf.so.3.0 00:02:22.854 SO libspdk_event_scheduler.so.4.0 00:02:22.854 SO libspdk_event_keyring.so.1.0 00:02:22.854 SO libspdk_event_vfu_tgt.so.3.0 00:02:22.854 SO libspdk_event_sock.so.5.0 00:02:22.854 SO libspdk_event_fsdev.so.1.0 00:02:22.854 SO libspdk_event_vmd.so.6.0 00:02:22.854 SO libspdk_event_vhost_blk.so.3.0 00:02:22.854 SYMLINK libspdk_event_iobuf.so 00:02:22.854 SYMLINK libspdk_event_sock.so 00:02:22.854 SYMLINK libspdk_event_scheduler.so 00:02:22.854 SYMLINK libspdk_event_fsdev.so 00:02:22.854 SYMLINK libspdk_event_keyring.so 00:02:22.854 SYMLINK libspdk_event_vfu_tgt.so 00:02:22.854 SYMLINK libspdk_event_vmd.so 00:02:22.854 SYMLINK libspdk_event_vhost_blk.so 00:02:23.113 CC module/event/subsystems/accel/accel.o 00:02:23.373 LIB libspdk_event_accel.a 00:02:23.373 SO libspdk_event_accel.so.6.0 00:02:23.373 SYMLINK libspdk_event_accel.so 00:02:23.941 CC module/event/subsystems/bdev/bdev.o 00:02:23.941 LIB libspdk_event_bdev.a 00:02:23.941 SO 
libspdk_event_bdev.so.6.0 00:02:23.941 SYMLINK libspdk_event_bdev.so 00:02:24.510 CC module/event/subsystems/scsi/scsi.o 00:02:24.510 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.510 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.510 CC module/event/subsystems/ublk/ublk.o 00:02:24.510 CC module/event/subsystems/nbd/nbd.o 00:02:24.510 LIB libspdk_event_ublk.a 00:02:24.510 LIB libspdk_event_nbd.a 00:02:24.510 LIB libspdk_event_scsi.a 00:02:24.510 SO libspdk_event_ublk.so.3.0 00:02:24.510 SO libspdk_event_nbd.so.6.0 00:02:24.510 SO libspdk_event_scsi.so.6.0 00:02:24.510 LIB libspdk_event_nvmf.a 00:02:24.510 SYMLINK libspdk_event_scsi.so 00:02:24.510 SYMLINK libspdk_event_ublk.so 00:02:24.510 SYMLINK libspdk_event_nbd.so 00:02:24.510 SO libspdk_event_nvmf.so.6.0 00:02:24.769 SYMLINK libspdk_event_nvmf.so 00:02:24.769 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.769 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.028 LIB libspdk_event_iscsi.a 00:02:25.028 LIB libspdk_event_vhost_scsi.a 00:02:25.028 SO libspdk_event_iscsi.so.6.0 00:02:25.028 SO libspdk_event_vhost_scsi.so.3.0 00:02:25.028 SYMLINK libspdk_event_iscsi.so 00:02:25.028 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.288 SO libspdk.so.6.0 00:02:25.288 SYMLINK libspdk.so 00:02:25.546 CC app/trace_record/trace_record.o 00:02:25.546 CC app/spdk_nvme_identify/identify.o 00:02:25.546 CXX app/trace/trace.o 00:02:25.546 CC app/spdk_nvme_perf/perf.o 00:02:25.546 CC app/spdk_top/spdk_top.o 00:02:25.546 CC test/rpc_client/rpc_client_test.o 00:02:25.546 CC app/spdk_lspci/spdk_lspci.o 00:02:25.546 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.546 TEST_HEADER include/spdk/accel.h 00:02:25.546 TEST_HEADER include/spdk/accel_module.h 00:02:25.546 TEST_HEADER include/spdk/assert.h 00:02:25.814 TEST_HEADER include/spdk/barrier.h 00:02:25.814 TEST_HEADER include/spdk/base64.h 00:02:25.814 TEST_HEADER include/spdk/bdev.h 00:02:25.814 TEST_HEADER include/spdk/bit_array.h 00:02:25.814 TEST_HEADER 
include/spdk/bdev_module.h 00:02:25.814 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.814 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.814 TEST_HEADER include/spdk/bit_pool.h 00:02:25.814 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.814 TEST_HEADER include/spdk/blob.h 00:02:25.814 TEST_HEADER include/spdk/blobfs.h 00:02:25.814 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.814 TEST_HEADER include/spdk/conf.h 00:02:25.814 TEST_HEADER include/spdk/cpuset.h 00:02:25.814 TEST_HEADER include/spdk/crc16.h 00:02:25.814 TEST_HEADER include/spdk/config.h 00:02:25.814 TEST_HEADER include/spdk/crc32.h 00:02:25.814 TEST_HEADER include/spdk/crc64.h 00:02:25.814 TEST_HEADER include/spdk/dma.h 00:02:25.814 TEST_HEADER include/spdk/endian.h 00:02:25.814 TEST_HEADER include/spdk/dif.h 00:02:25.814 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.814 TEST_HEADER include/spdk/env.h 00:02:25.814 TEST_HEADER include/spdk/fd_group.h 00:02:25.814 TEST_HEADER include/spdk/event.h 00:02:25.814 TEST_HEADER include/spdk/fd.h 00:02:25.814 TEST_HEADER include/spdk/file.h 00:02:25.814 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.814 TEST_HEADER include/spdk/fsdev.h 00:02:25.814 TEST_HEADER include/spdk/ftl.h 00:02:25.814 TEST_HEADER include/spdk/fsdev_module.h 00:02:25.814 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.814 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:25.814 TEST_HEADER include/spdk/histogram_data.h 00:02:25.814 TEST_HEADER include/spdk/idxd.h 00:02:25.814 TEST_HEADER include/spdk/hexlify.h 00:02:25.814 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.814 TEST_HEADER include/spdk/ioat.h 00:02:25.814 CC app/spdk_tgt/spdk_tgt.o 00:02:25.814 TEST_HEADER include/spdk/init.h 00:02:25.814 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.814 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.814 CC app/nvmf_tgt/nvmf_main.o 00:02:25.814 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.814 TEST_HEADER include/spdk/keyring.h 00:02:25.814 TEST_HEADER include/spdk/json.h 00:02:25.814 TEST_HEADER 
include/spdk/likely.h 00:02:25.814 TEST_HEADER include/spdk/log.h 00:02:25.814 TEST_HEADER include/spdk/keyring_module.h 00:02:25.814 TEST_HEADER include/spdk/lvol.h 00:02:25.814 TEST_HEADER include/spdk/md5.h 00:02:25.814 TEST_HEADER include/spdk/memory.h 00:02:25.814 TEST_HEADER include/spdk/mmio.h 00:02:25.814 TEST_HEADER include/spdk/net.h 00:02:25.814 TEST_HEADER include/spdk/nbd.h 00:02:25.814 TEST_HEADER include/spdk/notify.h 00:02:25.814 TEST_HEADER include/spdk/nvme.h 00:02:25.814 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.814 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.814 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.814 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.814 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.814 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.814 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.814 TEST_HEADER include/spdk/nvmf.h 00:02:25.814 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.814 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.814 TEST_HEADER include/spdk/opal.h 00:02:25.814 TEST_HEADER include/spdk/pci_ids.h 00:02:25.814 TEST_HEADER include/spdk/pipe.h 00:02:25.814 TEST_HEADER include/spdk/opal_spec.h 00:02:25.814 TEST_HEADER include/spdk/reduce.h 00:02:25.814 TEST_HEADER include/spdk/queue.h 00:02:25.814 TEST_HEADER include/spdk/rpc.h 00:02:25.814 TEST_HEADER include/spdk/scheduler.h 00:02:25.814 CC app/spdk_dd/spdk_dd.o 00:02:25.814 TEST_HEADER include/spdk/scsi.h 00:02:25.814 TEST_HEADER include/spdk/sock.h 00:02:25.814 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.814 TEST_HEADER include/spdk/stdinc.h 00:02:25.814 TEST_HEADER include/spdk/string.h 00:02:25.814 TEST_HEADER include/spdk/trace.h 00:02:25.814 TEST_HEADER include/spdk/thread.h 00:02:25.814 TEST_HEADER include/spdk/tree.h 00:02:25.814 TEST_HEADER include/spdk/trace_parser.h 00:02:25.814 TEST_HEADER include/spdk/ublk.h 00:02:25.814 TEST_HEADER include/spdk/uuid.h 00:02:25.814 TEST_HEADER include/spdk/util.h 00:02:25.814 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:25.814 TEST_HEADER include/spdk/version.h 00:02:25.814 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.814 TEST_HEADER include/spdk/vhost.h 00:02:25.814 TEST_HEADER include/spdk/vmd.h 00:02:25.814 TEST_HEADER include/spdk/xor.h 00:02:25.814 CXX test/cpp_headers/accel.o 00:02:25.814 TEST_HEADER include/spdk/zipf.h 00:02:25.814 CXX test/cpp_headers/accel_module.o 00:02:25.814 CXX test/cpp_headers/assert.o 00:02:25.814 CXX test/cpp_headers/base64.o 00:02:25.814 CXX test/cpp_headers/bdev.o 00:02:25.814 CXX test/cpp_headers/bdev_module.o 00:02:25.814 CXX test/cpp_headers/barrier.o 00:02:25.814 CXX test/cpp_headers/bit_array.o 00:02:25.814 CXX test/cpp_headers/bdev_zone.o 00:02:25.814 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.814 CXX test/cpp_headers/blob_bdev.o 00:02:25.814 CXX test/cpp_headers/bit_pool.o 00:02:25.814 CXX test/cpp_headers/blobfs.o 00:02:25.814 CXX test/cpp_headers/conf.o 00:02:25.814 CXX test/cpp_headers/blob.o 00:02:25.814 CXX test/cpp_headers/config.o 00:02:25.814 CXX test/cpp_headers/cpuset.o 00:02:25.814 CXX test/cpp_headers/crc16.o 00:02:25.814 CXX test/cpp_headers/crc32.o 00:02:25.814 CXX test/cpp_headers/crc64.o 00:02:25.814 CXX test/cpp_headers/dif.o 00:02:25.814 CXX test/cpp_headers/env_dpdk.o 00:02:25.814 CXX test/cpp_headers/dma.o 00:02:25.814 CXX test/cpp_headers/endian.o 00:02:25.814 CXX test/cpp_headers/fd_group.o 00:02:25.814 CXX test/cpp_headers/fd.o 00:02:25.814 CXX test/cpp_headers/env.o 00:02:25.814 CXX test/cpp_headers/file.o 00:02:25.814 CXX test/cpp_headers/event.o 00:02:25.814 CXX test/cpp_headers/fsdev.o 00:02:25.814 CXX test/cpp_headers/fsdev_module.o 00:02:25.814 CXX test/cpp_headers/ftl.o 00:02:25.814 CXX test/cpp_headers/fuse_dispatcher.o 00:02:25.814 CXX test/cpp_headers/hexlify.o 00:02:25.814 CXX test/cpp_headers/gpt_spec.o 00:02:25.814 CXX test/cpp_headers/histogram_data.o 00:02:25.814 CXX test/cpp_headers/idxd.o 00:02:25.814 CXX test/cpp_headers/ioat.o 00:02:25.814 CXX 
test/cpp_headers/idxd_spec.o 00:02:25.814 CXX test/cpp_headers/init.o 00:02:25.814 CXX test/cpp_headers/iscsi_spec.o 00:02:25.814 CXX test/cpp_headers/ioat_spec.o 00:02:25.814 CXX test/cpp_headers/json.o 00:02:25.814 CC examples/ioat/verify/verify.o 00:02:25.814 CXX test/cpp_headers/keyring_module.o 00:02:25.814 CXX test/cpp_headers/keyring.o 00:02:25.814 CXX test/cpp_headers/jsonrpc.o 00:02:25.814 CXX test/cpp_headers/likely.o 00:02:25.814 CC examples/util/zipf/zipf.o 00:02:25.814 CXX test/cpp_headers/log.o 00:02:25.814 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.814 CXX test/cpp_headers/md5.o 00:02:25.814 CXX test/cpp_headers/lvol.o 00:02:25.814 CXX test/cpp_headers/memory.o 00:02:25.814 CXX test/cpp_headers/mmio.o 00:02:25.814 CC test/app/histogram_perf/histogram_perf.o 00:02:25.814 CXX test/cpp_headers/net.o 00:02:25.814 CXX test/cpp_headers/nbd.o 00:02:25.814 CXX test/cpp_headers/notify.o 00:02:25.814 CXX test/cpp_headers/nvme.o 00:02:25.814 CXX test/cpp_headers/nvme_intel.o 00:02:25.814 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.814 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.814 CC test/env/pci/pci_ut.o 00:02:25.814 CXX test/cpp_headers/nvme_zns.o 00:02:25.814 CXX test/cpp_headers/nvme_spec.o 00:02:25.814 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.814 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.814 CC test/app/jsoncat/jsoncat.o 00:02:25.814 CXX test/cpp_headers/nvmf.o 00:02:25.814 CC test/env/memory/memory_ut.o 00:02:25.814 CC test/thread/poller_perf/poller_perf.o 00:02:25.814 CC examples/ioat/perf/perf.o 00:02:25.814 CC app/fio/nvme/fio_plugin.o 00:02:25.814 CC test/env/vtophys/vtophys.o 00:02:25.814 CXX test/cpp_headers/nvmf_spec.o 00:02:25.814 CC test/app/stub/stub.o 00:02:25.814 CC app/fio/bdev/fio_plugin.o 00:02:25.814 CXX test/cpp_headers/nvmf_transport.o 00:02:25.814 CC test/dma/test_dma/test_dma.o 00:02:25.814 LINK spdk_lspci 00:02:25.814 CC test/app/bdev_svc/bdev_svc.o 00:02:26.078 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 
00:02:26.078 LINK iscsi_tgt 00:02:26.344 LINK interrupt_tgt 00:02:26.344 LINK rpc_client_test 00:02:26.344 LINK spdk_trace_record 00:02:26.344 CC test/env/mem_callbacks/mem_callbacks.o 00:02:26.344 LINK spdk_tgt 00:02:26.344 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.344 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.344 LINK spdk_nvme_discover 00:02:26.344 LINK nvmf_tgt 00:02:26.344 LINK env_dpdk_post_init 00:02:26.344 LINK histogram_perf 00:02:26.344 LINK poller_perf 00:02:26.344 LINK vtophys 00:02:26.344 LINK stub 00:02:26.344 CXX test/cpp_headers/opal.o 00:02:26.344 CXX test/cpp_headers/opal_spec.o 00:02:26.344 CXX test/cpp_headers/pci_ids.o 00:02:26.344 LINK verify 00:02:26.344 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.344 CXX test/cpp_headers/pipe.o 00:02:26.344 CXX test/cpp_headers/queue.o 00:02:26.344 CXX test/cpp_headers/reduce.o 00:02:26.344 CXX test/cpp_headers/rpc.o 00:02:26.344 CXX test/cpp_headers/scheduler.o 00:02:26.344 CXX test/cpp_headers/scsi.o 00:02:26.344 CXX test/cpp_headers/sock.o 00:02:26.344 CXX test/cpp_headers/stdinc.o 00:02:26.344 CXX test/cpp_headers/scsi_spec.o 00:02:26.344 CXX test/cpp_headers/string.o 00:02:26.344 LINK jsoncat 00:02:26.344 CXX test/cpp_headers/thread.o 00:02:26.344 CXX test/cpp_headers/trace.o 00:02:26.344 CXX test/cpp_headers/trace_parser.o 00:02:26.344 LINK ioat_perf 00:02:26.344 CXX test/cpp_headers/ublk.o 00:02:26.344 CXX test/cpp_headers/tree.o 00:02:26.344 CXX test/cpp_headers/uuid.o 00:02:26.344 CXX test/cpp_headers/version.o 00:02:26.344 CXX test/cpp_headers/util.o 00:02:26.344 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.344 CXX test/cpp_headers/vhost.o 00:02:26.344 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.344 LINK zipf 00:02:26.344 CXX test/cpp_headers/vmd.o 00:02:26.344 CXX test/cpp_headers/zipf.o 00:02:26.344 CXX test/cpp_headers/xor.o 00:02:26.602 LINK spdk_dd 00:02:26.602 LINK bdev_svc 00:02:26.602 LINK pci_ut 00:02:26.602 LINK spdk_trace 00:02:26.862 LINK spdk_nvme 00:02:26.862 
CC test/event/event_perf/event_perf.o 00:02:26.862 CC test/event/reactor_perf/reactor_perf.o 00:02:26.862 CC test/event/reactor/reactor.o 00:02:26.862 CC test/event/app_repeat/app_repeat.o 00:02:26.862 CC test/event/scheduler/scheduler.o 00:02:26.862 LINK test_dma 00:02:26.862 CC examples/sock/hello_world/hello_sock.o 00:02:26.862 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.862 CC examples/idxd/perf/perf.o 00:02:26.862 LINK spdk_bdev 00:02:26.862 CC examples/vmd/led/led.o 00:02:26.862 CC examples/thread/thread/thread_ex.o 00:02:26.862 LINK nvme_fuzz 00:02:26.862 LINK reactor 00:02:26.862 LINK reactor_perf 00:02:26.862 LINK event_perf 00:02:27.121 LINK vhost_fuzz 00:02:27.121 LINK spdk_nvme_perf 00:02:27.121 LINK app_repeat 00:02:27.121 LINK spdk_top 00:02:27.121 LINK mem_callbacks 00:02:27.121 LINK lsvmd 00:02:27.121 LINK spdk_nvme_identify 00:02:27.121 LINK led 00:02:27.121 CC app/vhost/vhost.o 00:02:27.121 LINK scheduler 00:02:27.121 LINK hello_sock 00:02:27.121 LINK thread 00:02:27.121 LINK idxd_perf 00:02:27.379 LINK vhost 00:02:27.379 LINK memory_ut 00:02:27.379 CC test/nvme/startup/startup.o 00:02:27.379 CC test/nvme/fdp/fdp.o 00:02:27.379 CC test/nvme/reserve/reserve.o 00:02:27.379 CC test/nvme/overhead/overhead.o 00:02:27.379 CC test/nvme/aer/aer.o 00:02:27.379 CC test/nvme/connect_stress/connect_stress.o 00:02:27.379 CC test/nvme/sgl/sgl.o 00:02:27.379 CC test/nvme/compliance/nvme_compliance.o 00:02:27.379 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.379 CC test/nvme/err_injection/err_injection.o 00:02:27.379 CC test/nvme/simple_copy/simple_copy.o 00:02:27.379 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.379 CC test/nvme/cuse/cuse.o 00:02:27.379 CC test/nvme/reset/reset.o 00:02:27.379 CC test/nvme/boot_partition/boot_partition.o 00:02:27.379 CC test/nvme/e2edp/nvme_dp.o 00:02:27.379 CC test/accel/dif/dif.o 00:02:27.379 CC test/blobfs/mkfs/mkfs.o 00:02:27.638 CC test/lvol/esnap/esnap.o 00:02:27.638 LINK startup 00:02:27.638 CC 
examples/nvme/reconnect/reconnect.o 00:02:27.638 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.638 CC examples/nvme/arbitration/arbitration.o 00:02:27.638 CC examples/nvme/hotplug/hotplug.o 00:02:27.638 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.638 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.638 CC examples/nvme/abort/abort.o 00:02:27.638 CC examples/nvme/hello_world/hello_world.o 00:02:27.638 LINK boot_partition 00:02:27.638 LINK reserve 00:02:27.638 LINK fused_ordering 00:02:27.638 LINK connect_stress 00:02:27.638 LINK doorbell_aers 00:02:27.638 LINK err_injection 00:02:27.638 LINK simple_copy 00:02:27.638 LINK aer 00:02:27.638 LINK reset 00:02:27.638 CC examples/accel/perf/accel_perf.o 00:02:27.638 LINK sgl 00:02:27.638 LINK nvme_dp 00:02:27.638 LINK mkfs 00:02:27.638 LINK overhead 00:02:27.638 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:27.638 LINK nvme_compliance 00:02:27.638 LINK fdp 00:02:27.638 CC examples/blob/hello_world/hello_blob.o 00:02:27.638 CC examples/blob/cli/blobcli.o 00:02:27.638 LINK pmr_persistence 00:02:27.897 LINK cmb_copy 00:02:27.897 LINK hello_world 00:02:27.897 LINK hotplug 00:02:27.897 LINK iscsi_fuzz 00:02:27.897 LINK reconnect 00:02:27.897 LINK arbitration 00:02:27.897 LINK abort 00:02:27.897 LINK hello_blob 00:02:27.897 LINK hello_fsdev 00:02:27.897 LINK nvme_manage 00:02:27.897 LINK dif 00:02:28.156 LINK accel_perf 00:02:28.156 LINK blobcli 00:02:28.415 LINK cuse 00:02:28.415 CC test/bdev/bdevio/bdevio.o 00:02:28.673 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.673 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.931 LINK hello_bdev 00:02:28.931 LINK bdevio 00:02:29.190 LINK bdevperf 00:02:29.759 CC examples/nvmf/nvmf/nvmf.o 00:02:30.018 LINK nvmf 00:02:31.056 LINK esnap 00:02:31.315 00:02:31.315 real 0m55.512s 00:02:31.315 user 8m0.981s 00:02:31.315 sys 3m41.536s 00:02:31.315 15:55:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:31.315 15:55:32 make -- 
common/autotest_common.sh@10 -- $ set +x 00:02:31.315 ************************************ 00:02:31.315 END TEST make 00:02:31.315 ************************************ 00:02:31.315 15:55:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:31.315 15:55:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:31.315 15:55:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:31.315 15:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.315 15:55:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:31.315 15:55:32 -- pm/common@44 -- $ pid=2458505 00:02:31.315 15:55:32 -- pm/common@50 -- $ kill -TERM 2458505 00:02:31.315 15:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.315 15:55:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:31.315 15:55:32 -- pm/common@44 -- $ pid=2458507 00:02:31.315 15:55:32 -- pm/common@50 -- $ kill -TERM 2458507 00:02:31.315 15:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.315 15:55:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:31.315 15:55:32 -- pm/common@44 -- $ pid=2458509 00:02:31.315 15:55:32 -- pm/common@50 -- $ kill -TERM 2458509 00:02:31.315 15:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.315 15:55:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:31.315 15:55:32 -- pm/common@44 -- $ pid=2458534 00:02:31.315 15:55:32 -- pm/common@50 -- $ sudo -E kill -TERM 2458534 00:02:31.575 15:55:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:31.575 15:55:32 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:31.575 15:55:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:31.575 15:55:32 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:31.575 15:55:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:31.575 15:55:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:31.575 15:55:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:31.575 15:55:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:31.575 15:55:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:31.575 15:55:32 -- scripts/common.sh@336 -- # IFS=.-: 00:02:31.575 15:55:32 -- scripts/common.sh@336 -- # read -ra ver1 00:02:31.575 15:55:32 -- scripts/common.sh@337 -- # IFS=.-: 00:02:31.575 15:55:32 -- scripts/common.sh@337 -- # read -ra ver2 00:02:31.575 15:55:32 -- scripts/common.sh@338 -- # local 'op=<' 00:02:31.575 15:55:32 -- scripts/common.sh@340 -- # ver1_l=2 00:02:31.575 15:55:32 -- scripts/common.sh@341 -- # ver2_l=1 00:02:31.575 15:55:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:31.575 15:55:32 -- scripts/common.sh@344 -- # case "$op" in 00:02:31.575 15:55:32 -- scripts/common.sh@345 -- # : 1 00:02:31.575 15:55:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:31.575 15:55:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.575 15:55:32 -- scripts/common.sh@365 -- # decimal 1 00:02:31.575 15:55:32 -- scripts/common.sh@353 -- # local d=1 00:02:31.575 15:55:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:31.575 15:55:32 -- scripts/common.sh@355 -- # echo 1 00:02:31.575 15:55:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:31.575 15:55:32 -- scripts/common.sh@366 -- # decimal 2 00:02:31.575 15:55:32 -- scripts/common.sh@353 -- # local d=2 00:02:31.575 15:55:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:31.575 15:55:32 -- scripts/common.sh@355 -- # echo 2 00:02:31.575 15:55:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:31.575 15:55:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:31.575 15:55:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:31.575 15:55:32 -- scripts/common.sh@368 -- # return 0 00:02:31.575 15:55:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:31.575 15:55:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.575 --rc genhtml_branch_coverage=1 00:02:31.575 --rc genhtml_function_coverage=1 00:02:31.575 --rc genhtml_legend=1 00:02:31.575 --rc geninfo_all_blocks=1 00:02:31.575 --rc geninfo_unexecuted_blocks=1 00:02:31.575 00:02:31.575 ' 00:02:31.575 15:55:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.575 --rc genhtml_branch_coverage=1 00:02:31.575 --rc genhtml_function_coverage=1 00:02:31.575 --rc genhtml_legend=1 00:02:31.575 --rc geninfo_all_blocks=1 00:02:31.575 --rc geninfo_unexecuted_blocks=1 00:02:31.575 00:02:31.575 ' 00:02:31.575 15:55:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.575 --rc genhtml_branch_coverage=1 00:02:31.575 --rc 
genhtml_function_coverage=1 00:02:31.575 --rc genhtml_legend=1 00:02:31.575 --rc geninfo_all_blocks=1 00:02:31.575 --rc geninfo_unexecuted_blocks=1 00:02:31.575 00:02:31.575 ' 00:02:31.575 15:55:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:31.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.575 --rc genhtml_branch_coverage=1 00:02:31.576 --rc genhtml_function_coverage=1 00:02:31.576 --rc genhtml_legend=1 00:02:31.576 --rc geninfo_all_blocks=1 00:02:31.576 --rc geninfo_unexecuted_blocks=1 00:02:31.576 00:02:31.576 ' 00:02:31.576 15:55:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:31.576 15:55:32 -- nvmf/common.sh@7 -- # uname -s 00:02:31.576 15:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:31.576 15:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:31.576 15:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:31.576 15:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:31.576 15:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:31.576 15:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:31.576 15:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:31.576 15:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:31.576 15:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:31.576 15:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:31.576 15:55:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:31.576 15:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:31.576 15:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:31.576 15:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:31.576 15:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:31.576 15:55:32 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:31.576 15:55:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:31.576 15:55:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:31.576 15:55:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:31.576 15:55:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.576 15:55:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.576 15:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.576 15:55:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.576 15:55:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.576 15:55:32 -- paths/export.sh@5 -- # export PATH 00:02:31.576 15:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.576 15:55:32 -- nvmf/common.sh@51 -- # : 0 00:02:31.576 15:55:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:31.576 15:55:32 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:31.576 15:55:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:31.576 15:55:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:31.576 15:55:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:31.576 15:55:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:31.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:31.576 15:55:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:31.576 15:55:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:31.576 15:55:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:31.576 15:55:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:31.576 15:55:32 -- spdk/autotest.sh@32 -- # uname -s 00:02:31.576 15:55:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.576 15:55:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.576 15:55:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.576 15:55:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.576 15:55:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.576 15:55:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.576 15:55:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.576 15:55:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.576 15:55:32 -- spdk/autotest.sh@48 -- # udevadm_pid=2520986 00:02:31.576 15:55:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:31.576 15:55:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.576 15:55:32 -- pm/common@17 -- # local monitor 00:02:31.576 15:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.576 15:55:32 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:31.576 15:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.576 15:55:32 -- pm/common@21 -- # date +%s 00:02:31.576 15:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.576 15:55:32 -- pm/common@21 -- # date +%s 00:02:31.576 15:55:32 -- pm/common@25 -- # sleep 1 00:02:31.576 15:55:32 -- pm/common@21 -- # date +%s 00:02:31.576 15:55:32 -- pm/common@21 -- # date +%s 00:02:31.576 15:55:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114532 00:02:31.576 15:55:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114532 00:02:31.576 15:55:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114532 00:02:31.576 15:55:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114532 00:02:31.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114532_collect-vmstat.pm.log 00:02:31.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114532_collect-cpu-load.pm.log 00:02:31.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114532_collect-cpu-temp.pm.log 00:02:31.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114532_collect-bmc-pm.bmc.pm.log 00:02:32.774 
15:55:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.774 15:55:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.774 15:55:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:32.774 15:55:33 -- common/autotest_common.sh@10 -- # set +x 00:02:32.774 15:55:33 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.774 15:55:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:32.774 15:55:33 -- common/autotest_common.sh@10 -- # set +x 00:02:32.774 15:55:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.774 15:55:33 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.774 15:55:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.774 15:55:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.774 15:55:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.774 15:55:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.774 15:55:33 -- common/autotest_common.sh@1457 -- # uname 00:02:32.774 15:55:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:32.774 15:55:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.774 15:55:33 -- common/autotest_common.sh@1477 -- # uname 00:02:32.774 15:55:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:32.774 15:55:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:32.774 15:55:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:32.774 lcov: LCOV version 1.15 00:02:32.774 15:55:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:44.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.861 15:55:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:59.861 15:55:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:59.861 15:55:58 -- common/autotest_common.sh@10 -- # set +x 00:02:59.861 15:55:58 -- spdk/autotest.sh@78 -- # rm -f 00:02:59.861 15:55:58 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.800 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:00.800 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:00.800 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:00.800 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:00.800 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:00.800 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:00.800 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:00.800 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:01.058 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:01.058 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:01.058 15:56:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:01.058 15:56:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:01.058 15:56:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:01.058 15:56:01 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:01.058 15:56:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:01.058 15:56:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:01.058 15:56:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:01.058 15:56:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.058 15:56:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:01.058 15:56:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:01.058 15:56:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:01.058 15:56:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:01.058 15:56:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:01.058 15:56:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:01.058 15:56:01 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:01.317 No valid GPT data, bailing 00:03:01.317 15:56:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:01.317 15:56:01 -- scripts/common.sh@394 -- # pt= 00:03:01.317 15:56:01 -- scripts/common.sh@395 -- # return 1 00:03:01.317 15:56:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:01.317 1+0 records in 00:03:01.317 1+0 records out 00:03:01.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00175016 s, 599 MB/s 00:03:01.317 15:56:01 -- spdk/autotest.sh@105 -- # sync 00:03:01.317 15:56:01 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:01.317 15:56:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:01.317 15:56:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:06.595 15:56:07 -- spdk/autotest.sh@111 -- # uname -s 00:03:06.595 15:56:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:06.595 15:56:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:06.595 15:56:07 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:09.903 Hugepages 00:03:09.903 node hugesize free / total 00:03:09.903 node0 1048576kB 0 / 0 00:03:09.903 node0 2048kB 0 / 0 00:03:09.903 node1 1048576kB 0 / 0 00:03:09.903 node1 2048kB 0 / 0 00:03:09.903 00:03:09.903 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.903 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:09.903 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:09.903 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:09.903 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:09.903 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:09.903 15:56:10 -- spdk/autotest.sh@117 -- # uname -s 00:03:09.903 15:56:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:09.903 15:56:10 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:09.903 15:56:10 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.442 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.442 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.442 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.442 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.442 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.442 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.442 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.702 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:13.639 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:13.639 15:56:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:14.578 15:56:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:14.578 15:56:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:14.578 15:56:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:14.578 15:56:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:14.578 15:56:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:14.578 15:56:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:14.578 15:56:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.578 15:56:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:14.578 15:56:15 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:14.578 15:56:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:14.578 15:56:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:14.578 15:56:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.871 Waiting for block devices as requested 00:03:17.871 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:17.871 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:17.871 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:17.871 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:17.871 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:17.871 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:18.132 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:18.132 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:18.132 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:18.132 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:18.392 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:18.392 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:18.392 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:18.651 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:18.651 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:18.651 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:18.651 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:18.911 15:56:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:18.911 15:56:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:18.911 15:56:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:18.911 15:56:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:18.911 15:56:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:18.911 15:56:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:18.911 15:56:19 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:18.911 15:56:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:18.911 15:56:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:18.911 15:56:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:18.911 15:56:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:18.911 15:56:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:18.911 15:56:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:18.911 15:56:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:18.911 15:56:19 -- common/autotest_common.sh@1543 -- # continue 00:03:18.911 15:56:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:18.911 15:56:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:18.911 15:56:19 -- common/autotest_common.sh@10 -- # set +x 00:03:18.911 15:56:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:18.911 15:56:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:18.911 15:56:19 -- common/autotest_common.sh@10 -- # set +x 00:03:18.911 15:56:19 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.266 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:22.266 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.266 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.836 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.836 15:56:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:22.836 15:56:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:22.836 15:56:23 -- common/autotest_common.sh@10 -- # set +x 00:03:22.836 15:56:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:22.836 15:56:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:22.836 15:56:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:22.836 15:56:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:22.836 15:56:23 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:22.836 15:56:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:22.836 15:56:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:22.836 15:56:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:22.836 15:56:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:22.836 15:56:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:22.836 15:56:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:22.836 15:56:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:22.836 15:56:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:22.836 15:56:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:22.836 15:56:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:22.836 15:56:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:22.836 15:56:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:22.836 15:56:23 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:22.836 15:56:23 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:22.836 15:56:23 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:22.836 15:56:23 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:22.836 15:56:23 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:22.836 15:56:23 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:22.836 15:56:23 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2535400 00:03:22.836 15:56:23 -- common/autotest_common.sh@1585 -- # waitforlisten 2535400 00:03:22.836 15:56:23 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:22.836 15:56:23 -- common/autotest_common.sh@835 -- # '[' -z 2535400 ']' 00:03:22.836 15:56:23 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:22.836 15:56:23 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:22.836 15:56:23 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:22.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:22.836 15:56:23 -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:22.836 15:56:23 -- common/autotest_common.sh@10 -- # set +x
00:03:23.095 [2024-11-20 15:56:23.720344] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:03:23.095 [2024-11-20 15:56:23.720397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535400 ]
00:03:23.095 [2024-11-20 15:56:23.796944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:23.095 [2024-11-20 15:56:23.839671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:23.354 15:56:24 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:23.354 15:56:24 -- common/autotest_common.sh@868 -- # return 0
00:03:23.354 15:56:24 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:23.354 15:56:24 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:23.354 15:56:24 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:03:26.645 nvme0n1
00:03:26.645 15:56:27 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:26.645 [2024-11-20 15:56:27.229638] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:03:26.645 request:
00:03:26.645 {
00:03:26.645 "nvme_ctrlr_name": "nvme0",
00:03:26.645 "password": "test",
00:03:26.645 "method": "bdev_nvme_opal_revert",
00:03:26.645 "req_id": 1
00:03:26.645 }
00:03:26.645 Got JSON-RPC error response
00:03:26.645 response:
00:03:26.645 {
00:03:26.645 "code": -32602,
00:03:26.645 "message": "Invalid parameters"
00:03:26.645 }
00:03:26.645 15:56:27 -- common/autotest_common.sh@1591 -- # true
00:03:26.645 15:56:27 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:26.645 15:56:27 -- common/autotest_common.sh@1595 -- # killprocess 2535400
00:03:26.645 15:56:27 -- common/autotest_common.sh@954 -- # '[' -z 2535400 ']'
00:03:26.645 15:56:27 -- common/autotest_common.sh@958 -- # kill -0 2535400
00:03:26.645 15:56:27 -- common/autotest_common.sh@959 -- # uname
00:03:26.645 15:56:27 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:26.645 15:56:27 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535400
00:03:26.645 15:56:27 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:26.645 15:56:27 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:26.645 15:56:27 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535400'
00:03:26.645 killing process with pid 2535400
00:03:26.645 15:56:27 -- common/autotest_common.sh@973 -- # kill 2535400
00:03:26.645 15:56:28 -- common/autotest_common.sh@978 -- # wait 2535400
00:03:28.551 15:56:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:28.551 15:56:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:28.551 15:56:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:28.551 15:56:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:28.551 15:56:28 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:28.551 15:56:28 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:28.551 15:56:28 -- common/autotest_common.sh@10 -- # set +x
00:03:28.551 15:56:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:28.551 15:56:28 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:28.551 15:56:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:28.551 15:56:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:28.551 15:56:28 -- common/autotest_common.sh@10 -- # set +x
00:03:28.551 ************************************
00:03:28.551 START TEST env
************************************
00:03:28.551 15:56:28 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:28.551 * Looking for test storage...
00:03:28.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1693 -- # lcov --version
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:28.551 15:56:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:28.551 15:56:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:28.551 15:56:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:28.551 15:56:29 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:28.551 15:56:29 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:28.551 15:56:29 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:28.551 15:56:29 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:28.551 15:56:29 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:28.551 15:56:29 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:28.551 15:56:29 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:28.551 15:56:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:28.551 15:56:29 env -- scripts/common.sh@344 -- # case "$op" in
00:03:28.551 15:56:29 env -- scripts/common.sh@345 -- # : 1
00:03:28.551 15:56:29 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:28.551 15:56:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:28.551 15:56:29 env -- scripts/common.sh@365 -- # decimal 1
00:03:28.551 15:56:29 env -- scripts/common.sh@353 -- # local d=1
00:03:28.551 15:56:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:28.551 15:56:29 env -- scripts/common.sh@355 -- # echo 1
00:03:28.551 15:56:29 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:28.551 15:56:29 env -- scripts/common.sh@366 -- # decimal 2
00:03:28.551 15:56:29 env -- scripts/common.sh@353 -- # local d=2
00:03:28.551 15:56:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:28.551 15:56:29 env -- scripts/common.sh@355 -- # echo 2
00:03:28.551 15:56:29 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:28.551 15:56:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:28.551 15:56:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:28.551 15:56:29 env -- scripts/common.sh@368 -- # return 0
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:28.551 --rc genhtml_branch_coverage=1
00:03:28.551 --rc genhtml_function_coverage=1
00:03:28.551 --rc genhtml_legend=1
00:03:28.551 --rc geninfo_all_blocks=1
00:03:28.551 --rc geninfo_unexecuted_blocks=1
00:03:28.551
00:03:28.551 '
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:28.551 --rc genhtml_branch_coverage=1
00:03:28.551 --rc genhtml_function_coverage=1
00:03:28.551 --rc genhtml_legend=1
00:03:28.551 --rc geninfo_all_blocks=1
00:03:28.551 --rc geninfo_unexecuted_blocks=1
00:03:28.551
00:03:28.551 '
00:03:28.551 15:56:29 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:28.551 --rc genhtml_branch_coverage=1 00:03:28.551 --rc genhtml_function_coverage=1 00:03:28.551 --rc genhtml_legend=1 00:03:28.551 --rc geninfo_all_blocks=1 00:03:28.551 --rc geninfo_unexecuted_blocks=1 00:03:28.551 00:03:28.551 ' 00:03:28.551 15:56:29 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.551 --rc genhtml_branch_coverage=1 00:03:28.551 --rc genhtml_function_coverage=1 00:03:28.551 --rc genhtml_legend=1 00:03:28.551 --rc geninfo_all_blocks=1 00:03:28.551 --rc geninfo_unexecuted_blocks=1 00:03:28.551 00:03:28.551 ' 00:03:28.551 15:56:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:28.551 15:56:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.551 15:56:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.551 15:56:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.551 ************************************ 00:03:28.551 START TEST env_memory 00:03:28.551 ************************************ 00:03:28.551 15:56:29 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:28.551 00:03:28.551 00:03:28.551 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.551 http://cunit.sourceforge.net/ 00:03:28.551 00:03:28.551 00:03:28.551 Suite: memory 00:03:28.551 Test: alloc and free memory map ...[2024-11-20 15:56:29.208838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:28.551 passed 00:03:28.551 Test: mem map translation ...[2024-11-20 15:56:29.227982] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:28.551 [2024-11-20 
15:56:29.227996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:28.551 [2024-11-20 15:56:29.228032] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:28.551 [2024-11-20 15:56:29.228039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:28.551 passed 00:03:28.551 Test: mem map registration ...[2024-11-20 15:56:29.266022] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:28.551 [2024-11-20 15:56:29.266036] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:28.551 passed 00:03:28.551 Test: mem map adjacent registrations ...passed 00:03:28.551 00:03:28.551 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.551 suites 1 1 n/a 0 0 00:03:28.551 tests 4 4 4 0 0 00:03:28.551 asserts 152 152 152 0 n/a 00:03:28.551 00:03:28.551 Elapsed time = 0.141 seconds 00:03:28.551 00:03:28.551 real 0m0.154s 00:03:28.551 user 0m0.147s 00:03:28.551 sys 0m0.006s 00:03:28.551 15:56:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.551 15:56:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:28.551 ************************************ 00:03:28.551 END TEST env_memory 00:03:28.551 ************************************ 00:03:28.551 15:56:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:28.551 15:56:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:28.551 15:56:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.551 15:56:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.551 ************************************ 00:03:28.551 START TEST env_vtophys 00:03:28.551 ************************************ 00:03:28.551 15:56:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:28.811 EAL: lib.eal log level changed from notice to debug 00:03:28.811 EAL: Detected lcore 0 as core 0 on socket 0 00:03:28.811 EAL: Detected lcore 1 as core 1 on socket 0 00:03:28.811 EAL: Detected lcore 2 as core 2 on socket 0 00:03:28.811 EAL: Detected lcore 3 as core 3 on socket 0 00:03:28.811 EAL: Detected lcore 4 as core 4 on socket 0 00:03:28.811 EAL: Detected lcore 5 as core 5 on socket 0 00:03:28.811 EAL: Detected lcore 6 as core 6 on socket 0 00:03:28.811 EAL: Detected lcore 7 as core 8 on socket 0 00:03:28.811 EAL: Detected lcore 8 as core 9 on socket 0 00:03:28.811 EAL: Detected lcore 9 as core 10 on socket 0 00:03:28.811 EAL: Detected lcore 10 as core 11 on socket 0 00:03:28.811 EAL: Detected lcore 11 as core 12 on socket 0 00:03:28.811 EAL: Detected lcore 12 as core 13 on socket 0 00:03:28.811 EAL: Detected lcore 13 as core 16 on socket 0 00:03:28.811 EAL: Detected lcore 14 as core 17 on socket 0 00:03:28.811 EAL: Detected lcore 15 as core 18 on socket 0 00:03:28.811 EAL: Detected lcore 16 as core 19 on socket 0 00:03:28.811 EAL: Detected lcore 17 as core 20 on socket 0 00:03:28.811 EAL: Detected lcore 18 as core 21 on socket 0 00:03:28.811 EAL: Detected lcore 19 as core 25 on socket 0 00:03:28.811 EAL: Detected lcore 20 as core 26 on socket 0 00:03:28.811 EAL: Detected lcore 21 as core 27 on socket 0 00:03:28.811 EAL: Detected lcore 22 as core 28 on socket 0 00:03:28.811 EAL: Detected lcore 23 as core 29 on socket 0 00:03:28.811 EAL: Detected lcore 24 as core 0 on socket 1 00:03:28.811 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:28.811 EAL: Detected lcore 26 as core 2 on socket 1 00:03:28.811 EAL: Detected lcore 27 as core 3 on socket 1 00:03:28.811 EAL: Detected lcore 28 as core 4 on socket 1 00:03:28.811 EAL: Detected lcore 29 as core 5 on socket 1 00:03:28.811 EAL: Detected lcore 30 as core 6 on socket 1 00:03:28.811 EAL: Detected lcore 31 as core 9 on socket 1 00:03:28.811 EAL: Detected lcore 32 as core 10 on socket 1 00:03:28.811 EAL: Detected lcore 33 as core 11 on socket 1 00:03:28.811 EAL: Detected lcore 34 as core 12 on socket 1 00:03:28.811 EAL: Detected lcore 35 as core 13 on socket 1 00:03:28.811 EAL: Detected lcore 36 as core 16 on socket 1 00:03:28.811 EAL: Detected lcore 37 as core 17 on socket 1 00:03:28.811 EAL: Detected lcore 38 as core 18 on socket 1 00:03:28.812 EAL: Detected lcore 39 as core 19 on socket 1 00:03:28.812 EAL: Detected lcore 40 as core 20 on socket 1 00:03:28.812 EAL: Detected lcore 41 as core 21 on socket 1 00:03:28.812 EAL: Detected lcore 42 as core 24 on socket 1 00:03:28.812 EAL: Detected lcore 43 as core 25 on socket 1 00:03:28.812 EAL: Detected lcore 44 as core 26 on socket 1 00:03:28.812 EAL: Detected lcore 45 as core 27 on socket 1 00:03:28.812 EAL: Detected lcore 46 as core 28 on socket 1 00:03:28.812 EAL: Detected lcore 47 as core 29 on socket 1 00:03:28.812 EAL: Detected lcore 48 as core 0 on socket 0 00:03:28.812 EAL: Detected lcore 49 as core 1 on socket 0 00:03:28.812 EAL: Detected lcore 50 as core 2 on socket 0 00:03:28.812 EAL: Detected lcore 51 as core 3 on socket 0 00:03:28.812 EAL: Detected lcore 52 as core 4 on socket 0 00:03:28.812 EAL: Detected lcore 53 as core 5 on socket 0 00:03:28.812 EAL: Detected lcore 54 as core 6 on socket 0 00:03:28.812 EAL: Detected lcore 55 as core 8 on socket 0 00:03:28.812 EAL: Detected lcore 56 as core 9 on socket 0 00:03:28.812 EAL: Detected lcore 57 as core 10 on socket 0 00:03:28.812 EAL: Detected lcore 58 as core 11 on socket 0 00:03:28.812 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:28.812 EAL: Detected lcore 60 as core 13 on socket 0 00:03:28.812 EAL: Detected lcore 61 as core 16 on socket 0 00:03:28.812 EAL: Detected lcore 62 as core 17 on socket 0 00:03:28.812 EAL: Detected lcore 63 as core 18 on socket 0 00:03:28.812 EAL: Detected lcore 64 as core 19 on socket 0 00:03:28.812 EAL: Detected lcore 65 as core 20 on socket 0 00:03:28.812 EAL: Detected lcore 66 as core 21 on socket 0 00:03:28.812 EAL: Detected lcore 67 as core 25 on socket 0 00:03:28.812 EAL: Detected lcore 68 as core 26 on socket 0 00:03:28.812 EAL: Detected lcore 69 as core 27 on socket 0 00:03:28.812 EAL: Detected lcore 70 as core 28 on socket 0 00:03:28.812 EAL: Detected lcore 71 as core 29 on socket 0 00:03:28.812 EAL: Detected lcore 72 as core 0 on socket 1 00:03:28.812 EAL: Detected lcore 73 as core 1 on socket 1 00:03:28.812 EAL: Detected lcore 74 as core 2 on socket 1 00:03:28.812 EAL: Detected lcore 75 as core 3 on socket 1 00:03:28.812 EAL: Detected lcore 76 as core 4 on socket 1 00:03:28.812 EAL: Detected lcore 77 as core 5 on socket 1 00:03:28.812 EAL: Detected lcore 78 as core 6 on socket 1 00:03:28.812 EAL: Detected lcore 79 as core 9 on socket 1 00:03:28.812 EAL: Detected lcore 80 as core 10 on socket 1 00:03:28.812 EAL: Detected lcore 81 as core 11 on socket 1 00:03:28.812 EAL: Detected lcore 82 as core 12 on socket 1 00:03:28.812 EAL: Detected lcore 83 as core 13 on socket 1 00:03:28.812 EAL: Detected lcore 84 as core 16 on socket 1 00:03:28.812 EAL: Detected lcore 85 as core 17 on socket 1 00:03:28.812 EAL: Detected lcore 86 as core 18 on socket 1 00:03:28.812 EAL: Detected lcore 87 as core 19 on socket 1 00:03:28.812 EAL: Detected lcore 88 as core 20 on socket 1 00:03:28.812 EAL: Detected lcore 89 as core 21 on socket 1 00:03:28.812 EAL: Detected lcore 90 as core 24 on socket 1 00:03:28.812 EAL: Detected lcore 91 as core 25 on socket 1 00:03:28.812 EAL: Detected lcore 92 as core 26 on socket 1 00:03:28.812 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:28.812 EAL: Detected lcore 94 as core 28 on socket 1 00:03:28.812 EAL: Detected lcore 95 as core 29 on socket 1 00:03:28.812 EAL: Maximum logical cores by configuration: 128 00:03:28.812 EAL: Detected CPU lcores: 96 00:03:28.812 EAL: Detected NUMA nodes: 2 00:03:28.812 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:28.812 EAL: Detected shared linkage of DPDK 00:03:28.812 EAL: No shared files mode enabled, IPC will be disabled 00:03:28.812 EAL: Bus pci wants IOVA as 'DC' 00:03:28.812 EAL: Buses did not request a specific IOVA mode. 00:03:28.812 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:28.812 EAL: Selected IOVA mode 'VA' 00:03:28.812 EAL: Probing VFIO support... 00:03:28.812 EAL: IOMMU type 1 (Type 1) is supported 00:03:28.812 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:28.812 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:28.812 EAL: VFIO support initialized 00:03:28.812 EAL: Ask a virtual area of 0x2e000 bytes 00:03:28.812 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:28.812 EAL: Setting up physically contiguous memory... 
00:03:28.812 EAL: Setting maximum number of open files to 524288 00:03:28.812 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:28.812 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:28.812 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:28.812 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:28.812 EAL: Ask a virtual area of 0x61000 bytes 00:03:28.812 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:28.812 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:28.812 EAL: Ask a virtual area of 0x400000000 bytes 00:03:28.812 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:28.812 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:28.812 EAL: Hugepages will be freed exactly as allocated. 
00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: TSC frequency is ~2300000 KHz 00:03:28.812 EAL: Main lcore 0 is ready (tid=7f08e0310a00;cpuset=[0]) 00:03:28.812 EAL: Trying to obtain current memory policy. 00:03:28.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.812 EAL: Restoring previous memory policy: 0 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was expanded by 2MB 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:28.812 EAL: Mem event callback 'spdk:(nil)' registered 00:03:28.812 00:03:28.812 00:03:28.812 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.812 http://cunit.sourceforge.net/ 00:03:28.812 00:03:28.812 00:03:28.812 Suite: components_suite 00:03:28.812 Test: vtophys_malloc_test ...passed 00:03:28.812 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:28.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.812 EAL: Restoring previous memory policy: 4 00:03:28.812 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was expanded by 4MB 00:03:28.812 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was shrunk by 4MB 00:03:28.812 EAL: Trying to obtain current memory policy. 
00:03:28.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.812 EAL: Restoring previous memory policy: 4 00:03:28.812 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was expanded by 6MB 00:03:28.812 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was shrunk by 6MB 00:03:28.812 EAL: Trying to obtain current memory policy. 00:03:28.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.812 EAL: Restoring previous memory policy: 4 00:03:28.812 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was expanded by 10MB 00:03:28.812 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.812 EAL: request: mp_malloc_sync 00:03:28.812 EAL: No shared files mode enabled, IPC is disabled 00:03:28.812 EAL: Heap on socket 0 was shrunk by 10MB 00:03:28.812 EAL: Trying to obtain current memory policy. 00:03:28.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.813 EAL: Restoring previous memory policy: 4 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was expanded by 18MB 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was shrunk by 18MB 00:03:28.813 EAL: Trying to obtain current memory policy. 
00:03:28.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.813 EAL: Restoring previous memory policy: 4 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was expanded by 34MB 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was shrunk by 34MB 00:03:28.813 EAL: Trying to obtain current memory policy. 00:03:28.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.813 EAL: Restoring previous memory policy: 4 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was expanded by 66MB 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was shrunk by 66MB 00:03:28.813 EAL: Trying to obtain current memory policy. 00:03:28.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.813 EAL: Restoring previous memory policy: 4 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was expanded by 130MB 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was shrunk by 130MB 00:03:28.813 EAL: Trying to obtain current memory policy. 
00:03:28.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.813 EAL: Restoring previous memory policy: 4 00:03:28.813 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.813 EAL: request: mp_malloc_sync 00:03:28.813 EAL: No shared files mode enabled, IPC is disabled 00:03:28.813 EAL: Heap on socket 0 was expanded by 258MB 00:03:29.071 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.071 EAL: request: mp_malloc_sync 00:03:29.071 EAL: No shared files mode enabled, IPC is disabled 00:03:29.071 EAL: Heap on socket 0 was shrunk by 258MB 00:03:29.072 EAL: Trying to obtain current memory policy. 00:03:29.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:29.072 EAL: Restoring previous memory policy: 4 00:03:29.072 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.072 EAL: request: mp_malloc_sync 00:03:29.072 EAL: No shared files mode enabled, IPC is disabled 00:03:29.072 EAL: Heap on socket 0 was expanded by 514MB 00:03:29.072 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.330 EAL: request: mp_malloc_sync 00:03:29.330 EAL: No shared files mode enabled, IPC is disabled 00:03:29.330 EAL: Heap on socket 0 was shrunk by 514MB 00:03:29.330 EAL: Trying to obtain current memory policy. 
00:03:29.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:29.330 EAL: Restoring previous memory policy: 4 00:03:29.330 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.331 EAL: request: mp_malloc_sync 00:03:29.331 EAL: No shared files mode enabled, IPC is disabled 00:03:29.331 EAL: Heap on socket 0 was expanded by 1026MB 00:03:29.589 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.850 EAL: request: mp_malloc_sync 00:03:29.850 EAL: No shared files mode enabled, IPC is disabled 00:03:29.850 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:29.850 passed 00:03:29.850 00:03:29.850 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.850 suites 1 1 n/a 0 0 00:03:29.850 tests 2 2 2 0 0 00:03:29.850 asserts 497 497 497 0 n/a 00:03:29.850 00:03:29.850 Elapsed time = 0.976 seconds 00:03:29.850 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.850 EAL: request: mp_malloc_sync 00:03:29.850 EAL: No shared files mode enabled, IPC is disabled 00:03:29.850 EAL: Heap on socket 0 was shrunk by 2MB 00:03:29.850 EAL: No shared files mode enabled, IPC is disabled 00:03:29.850 EAL: No shared files mode enabled, IPC is disabled 00:03:29.850 EAL: No shared files mode enabled, IPC is disabled 00:03:29.850 00:03:29.850 real 0m1.106s 00:03:29.850 user 0m0.648s 00:03:29.850 sys 0m0.430s 00:03:29.850 15:56:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.850 15:56:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:29.850 ************************************ 00:03:29.850 END TEST env_vtophys 00:03:29.850 ************************************ 00:03:29.850 15:56:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:29.850 15:56:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.850 15:56:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.850 15:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.850 
************************************ 00:03:29.850 START TEST env_pci 00:03:29.850 ************************************ 00:03:29.850 15:56:30 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:29.850 00:03:29.850 00:03:29.850 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.850 http://cunit.sourceforge.net/ 00:03:29.850 00:03:29.850 00:03:29.850 Suite: pci 00:03:29.850 Test: pci_hook ...[2024-11-20 15:56:30.573824] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2536641 has claimed it 00:03:29.850 EAL: Cannot find device (10000:00:01.0) 00:03:29.850 EAL: Failed to attach device on primary process 00:03:29.850 passed 00:03:29.850 00:03:29.850 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.850 suites 1 1 n/a 0 0 00:03:29.850 tests 1 1 1 0 0 00:03:29.850 asserts 25 25 25 0 n/a 00:03:29.850 00:03:29.850 Elapsed time = 0.026 seconds 00:03:29.850 00:03:29.850 real 0m0.045s 00:03:29.850 user 0m0.017s 00:03:29.850 sys 0m0.028s 00:03:29.850 15:56:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.850 15:56:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:29.850 ************************************ 00:03:29.850 END TEST env_pci 00:03:29.850 ************************************ 00:03:29.850 15:56:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:29.850 15:56:30 env -- env/env.sh@15 -- # uname 00:03:29.850 15:56:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:29.850 15:56:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:29.850 15:56:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.850 15:56:30 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:29.850 15:56:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.850 15:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.850 ************************************ 00:03:29.850 START TEST env_dpdk_post_init 00:03:29.850 ************************************ 00:03:29.850 15:56:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:30.110 EAL: Detected CPU lcores: 96 00:03:30.110 EAL: Detected NUMA nodes: 2 00:03:30.110 EAL: Detected shared linkage of DPDK 00:03:30.110 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:30.110 EAL: Selected IOVA mode 'VA' 00:03:30.110 EAL: VFIO support initialized 00:03:30.110 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:30.110 EAL: Using IOMMU type 1 (Type 1) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:30.110 EAL: Ignore mapping IO port bar(1) 00:03:30.110 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:31.048 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:31.048 EAL: Ignore mapping IO port bar(1) 00:03:31.048 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:34.334 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:34.334 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:34.334 Starting DPDK initialization... 00:03:34.334 Starting SPDK post initialization... 00:03:34.334 SPDK NVMe probe 00:03:34.334 Attaching to 0000:5e:00.0 00:03:34.334 Attached to 0000:5e:00.0 00:03:34.334 Cleaning up... 
00:03:34.334 00:03:34.334 real 0m4.334s 00:03:34.334 user 0m2.964s 00:03:34.334 sys 0m0.440s 00:03:34.334 15:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.334 15:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:34.334 ************************************ 00:03:34.334 END TEST env_dpdk_post_init 00:03:34.334 ************************************ 00:03:34.334 15:56:35 env -- env/env.sh@26 -- # uname 00:03:34.334 15:56:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:34.334 15:56:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:34.334 15:56:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.334 15:56:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.334 15:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.334 ************************************ 00:03:34.334 START TEST env_mem_callbacks 00:03:34.334 ************************************ 00:03:34.334 15:56:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:34.334 EAL: Detected CPU lcores: 96 00:03:34.334 EAL: Detected NUMA nodes: 2 00:03:34.334 EAL: Detected shared linkage of DPDK 00:03:34.334 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:34.334 EAL: Selected IOVA mode 'VA' 00:03:34.334 EAL: VFIO support initialized 00:03:34.334 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:34.334 00:03:34.334 00:03:34.334 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.334 http://cunit.sourceforge.net/ 00:03:34.334 00:03:34.334 00:03:34.334 Suite: memory 00:03:34.334 Test: test ... 
00:03:34.334 register 0x200000200000 2097152 00:03:34.334 malloc 3145728 00:03:34.334 register 0x200000400000 4194304 00:03:34.334 buf 0x200000500000 len 3145728 PASSED 00:03:34.334 malloc 64 00:03:34.334 buf 0x2000004fff40 len 64 PASSED 00:03:34.334 malloc 4194304 00:03:34.334 register 0x200000800000 6291456 00:03:34.334 buf 0x200000a00000 len 4194304 PASSED 00:03:34.334 free 0x200000500000 3145728 00:03:34.334 free 0x2000004fff40 64 00:03:34.334 unregister 0x200000400000 4194304 PASSED 00:03:34.334 free 0x200000a00000 4194304 00:03:34.334 unregister 0x200000800000 6291456 PASSED 00:03:34.334 malloc 8388608 00:03:34.334 register 0x200000400000 10485760 00:03:34.334 buf 0x200000600000 len 8388608 PASSED 00:03:34.334 free 0x200000600000 8388608 00:03:34.334 unregister 0x200000400000 10485760 PASSED 00:03:34.334 passed 00:03:34.334 00:03:34.334 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.334 suites 1 1 n/a 0 0 00:03:34.334 tests 1 1 1 0 0 00:03:34.334 asserts 15 15 15 0 n/a 00:03:34.334 00:03:34.334 Elapsed time = 0.007 seconds 00:03:34.334 00:03:34.334 real 0m0.061s 00:03:34.334 user 0m0.022s 00:03:34.334 sys 0m0.039s 00:03:34.334 15:56:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.334 15:56:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:34.334 ************************************ 00:03:34.334 END TEST env_mem_callbacks 00:03:34.334 ************************************ 00:03:34.594 00:03:34.594 real 0m6.232s 00:03:34.594 user 0m4.033s 00:03:34.594 sys 0m1.276s 00:03:34.594 15:56:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.594 15:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.594 ************************************ 00:03:34.594 END TEST env 00:03:34.594 ************************************ 00:03:34.594 15:56:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:34.594 15:56:35 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.594 15:56:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.594 15:56:35 -- common/autotest_common.sh@10 -- # set +x 00:03:34.594 ************************************ 00:03:34.594 START TEST rpc 00:03:34.594 ************************************ 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:34.594 * Looking for test storage... 00:03:34.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.594 15:56:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.594 15:56:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.594 15:56:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.594 15:56:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.594 15:56:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.594 15:56:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:34.594 15:56:35 rpc -- scripts/common.sh@345 -- # : 1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.594 15:56:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.594 15:56:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@353 -- # local d=1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.594 15:56:35 rpc -- scripts/common.sh@355 -- # echo 1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.594 15:56:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@353 -- # local d=2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.594 15:56:35 rpc -- scripts/common.sh@355 -- # echo 2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.594 15:56:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.594 15:56:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.594 15:56:35 rpc -- scripts/common.sh@368 -- # return 0 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.594 --rc genhtml_branch_coverage=1 00:03:34.594 --rc genhtml_function_coverage=1 00:03:34.594 --rc genhtml_legend=1 00:03:34.594 --rc geninfo_all_blocks=1 00:03:34.594 --rc geninfo_unexecuted_blocks=1 00:03:34.594 00:03:34.594 ' 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.594 --rc genhtml_branch_coverage=1 00:03:34.594 --rc genhtml_function_coverage=1 00:03:34.594 --rc genhtml_legend=1 00:03:34.594 --rc geninfo_all_blocks=1 00:03:34.594 --rc geninfo_unexecuted_blocks=1 00:03:34.594 00:03:34.594 ' 00:03:34.594 15:56:35 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:34.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:34.595 --rc genhtml_branch_coverage=1 00:03:34.595 --rc genhtml_function_coverage=1 00:03:34.595 --rc genhtml_legend=1 00:03:34.595 --rc geninfo_all_blocks=1 00:03:34.595 --rc geninfo_unexecuted_blocks=1 00:03:34.595 00:03:34.595 ' 00:03:34.595 15:56:35 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.595 --rc genhtml_branch_coverage=1 00:03:34.595 --rc genhtml_function_coverage=1 00:03:34.595 --rc genhtml_legend=1 00:03:34.595 --rc geninfo_all_blocks=1 00:03:34.595 --rc geninfo_unexecuted_blocks=1 00:03:34.595 00:03:34.595 ' 00:03:34.595 15:56:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2537553 00:03:34.855 15:56:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.855 15:56:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:34.855 15:56:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2537553 00:03:34.855 15:56:35 rpc -- common/autotest_common.sh@835 -- # '[' -z 2537553 ']' 00:03:34.855 15:56:35 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.855 15:56:35 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:34.855 15:56:35 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:34.855 15:56:35 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:34.855 15:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.855 [2024-11-20 15:56:35.478309] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:03:34.855 [2024-11-20 15:56:35.478359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537553 ] 00:03:34.855 [2024-11-20 15:56:35.550882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.855 [2024-11-20 15:56:35.590319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:34.855 [2024-11-20 15:56:35.590356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2537553' to capture a snapshot of events at runtime. 00:03:34.855 [2024-11-20 15:56:35.590366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:34.855 [2024-11-20 15:56:35.590372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:34.855 [2024-11-20 15:56:35.590376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2537553 for offline analysis/debug. 
00:03:34.855 [2024-11-20 15:56:35.590954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.115 15:56:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:35.116 15:56:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:35.116 15:56:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.116 15:56:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.116 15:56:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:35.116 15:56:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:35.116 15:56:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.116 15:56:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.116 15:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.116 ************************************ 00:03:35.116 START TEST rpc_integrity 00:03:35.116 ************************************ 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.116 15:56:35 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.116 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:35.116 { 00:03:35.116 "name": "Malloc0", 00:03:35.116 "aliases": [ 00:03:35.116 "2e2c3c95-db8a-492b-96d6-c3387c6941ab" 00:03:35.116 ], 00:03:35.116 "product_name": "Malloc disk", 00:03:35.116 "block_size": 512, 00:03:35.116 "num_blocks": 16384, 00:03:35.116 "uuid": "2e2c3c95-db8a-492b-96d6-c3387c6941ab", 00:03:35.116 "assigned_rate_limits": { 00:03:35.116 "rw_ios_per_sec": 0, 00:03:35.116 "rw_mbytes_per_sec": 0, 00:03:35.116 "r_mbytes_per_sec": 0, 00:03:35.116 "w_mbytes_per_sec": 0 00:03:35.116 }, 00:03:35.116 "claimed": false, 00:03:35.116 "zoned": false, 00:03:35.116 "supported_io_types": { 00:03:35.116 "read": true, 00:03:35.116 "write": true, 00:03:35.116 "unmap": true, 00:03:35.116 "flush": true, 00:03:35.116 "reset": true, 00:03:35.116 "nvme_admin": false, 00:03:35.116 "nvme_io": false, 00:03:35.116 "nvme_io_md": false, 00:03:35.116 "write_zeroes": true, 00:03:35.116 "zcopy": true, 00:03:35.116 "get_zone_info": false, 00:03:35.116 
"zone_management": false, 00:03:35.116 "zone_append": false, 00:03:35.116 "compare": false, 00:03:35.116 "compare_and_write": false, 00:03:35.116 "abort": true, 00:03:35.116 "seek_hole": false, 00:03:35.116 "seek_data": false, 00:03:35.116 "copy": true, 00:03:35.116 "nvme_iov_md": false 00:03:35.116 }, 00:03:35.116 "memory_domains": [ 00:03:35.116 { 00:03:35.116 "dma_device_id": "system", 00:03:35.116 "dma_device_type": 1 00:03:35.116 }, 00:03:35.116 { 00:03:35.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.116 "dma_device_type": 2 00:03:35.116 } 00:03:35.116 ], 00:03:35.116 "driver_specific": {} 00:03:35.116 } 00:03:35.116 ]' 00:03:35.116 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:35.375 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:35.375 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:35.375 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.375 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.375 [2024-11-20 15:56:35.981570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:35.375 [2024-11-20 15:56:35.981600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:35.376 [2024-11-20 15:56:35.981612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa54280 00:03:35.376 [2024-11-20 15:56:35.981619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:35.376 [2024-11-20 15:56:35.982740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:35.376 [2024-11-20 15:56:35.982762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:35.376 Passthru0 00:03:35.376 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.376 15:56:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:35.376 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.376 15:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:35.376 { 00:03:35.376 "name": "Malloc0", 00:03:35.376 "aliases": [ 00:03:35.376 "2e2c3c95-db8a-492b-96d6-c3387c6941ab" 00:03:35.376 ], 00:03:35.376 "product_name": "Malloc disk", 00:03:35.376 "block_size": 512, 00:03:35.376 "num_blocks": 16384, 00:03:35.376 "uuid": "2e2c3c95-db8a-492b-96d6-c3387c6941ab", 00:03:35.376 "assigned_rate_limits": { 00:03:35.376 "rw_ios_per_sec": 0, 00:03:35.376 "rw_mbytes_per_sec": 0, 00:03:35.376 "r_mbytes_per_sec": 0, 00:03:35.376 "w_mbytes_per_sec": 0 00:03:35.376 }, 00:03:35.376 "claimed": true, 00:03:35.376 "claim_type": "exclusive_write", 00:03:35.376 "zoned": false, 00:03:35.376 "supported_io_types": { 00:03:35.376 "read": true, 00:03:35.376 "write": true, 00:03:35.376 "unmap": true, 00:03:35.376 "flush": true, 00:03:35.376 "reset": true, 00:03:35.376 "nvme_admin": false, 00:03:35.376 "nvme_io": false, 00:03:35.376 "nvme_io_md": false, 00:03:35.376 "write_zeroes": true, 00:03:35.376 "zcopy": true, 00:03:35.376 "get_zone_info": false, 00:03:35.376 "zone_management": false, 00:03:35.376 "zone_append": false, 00:03:35.376 "compare": false, 00:03:35.376 "compare_and_write": false, 00:03:35.376 "abort": true, 00:03:35.376 "seek_hole": false, 00:03:35.376 "seek_data": false, 00:03:35.376 "copy": true, 00:03:35.376 "nvme_iov_md": false 00:03:35.376 }, 00:03:35.376 "memory_domains": [ 00:03:35.376 { 00:03:35.376 "dma_device_id": "system", 00:03:35.376 "dma_device_type": 1 00:03:35.376 }, 00:03:35.376 { 00:03:35.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.376 "dma_device_type": 2 00:03:35.376 } 00:03:35.376 ], 00:03:35.376 "driver_specific": {} 00:03:35.376 }, 00:03:35.376 { 
00:03:35.376 "name": "Passthru0", 00:03:35.376 "aliases": [ 00:03:35.376 "6c1f9b26-832e-58f3-bb5f-60ea36bace9a" 00:03:35.376 ], 00:03:35.376 "product_name": "passthru", 00:03:35.376 "block_size": 512, 00:03:35.376 "num_blocks": 16384, 00:03:35.376 "uuid": "6c1f9b26-832e-58f3-bb5f-60ea36bace9a", 00:03:35.376 "assigned_rate_limits": { 00:03:35.376 "rw_ios_per_sec": 0, 00:03:35.376 "rw_mbytes_per_sec": 0, 00:03:35.376 "r_mbytes_per_sec": 0, 00:03:35.376 "w_mbytes_per_sec": 0 00:03:35.376 }, 00:03:35.376 "claimed": false, 00:03:35.376 "zoned": false, 00:03:35.376 "supported_io_types": { 00:03:35.376 "read": true, 00:03:35.376 "write": true, 00:03:35.376 "unmap": true, 00:03:35.376 "flush": true, 00:03:35.376 "reset": true, 00:03:35.376 "nvme_admin": false, 00:03:35.376 "nvme_io": false, 00:03:35.376 "nvme_io_md": false, 00:03:35.376 "write_zeroes": true, 00:03:35.376 "zcopy": true, 00:03:35.376 "get_zone_info": false, 00:03:35.376 "zone_management": false, 00:03:35.376 "zone_append": false, 00:03:35.376 "compare": false, 00:03:35.376 "compare_and_write": false, 00:03:35.376 "abort": true, 00:03:35.376 "seek_hole": false, 00:03:35.376 "seek_data": false, 00:03:35.376 "copy": true, 00:03:35.376 "nvme_iov_md": false 00:03:35.376 }, 00:03:35.376 "memory_domains": [ 00:03:35.376 { 00:03:35.376 "dma_device_id": "system", 00:03:35.376 "dma_device_type": 1 00:03:35.376 }, 00:03:35.376 { 00:03:35.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.376 "dma_device_type": 2 00:03:35.376 } 00:03:35.376 ], 00:03:35.376 "driver_specific": { 00:03:35.376 "passthru": { 00:03:35.376 "name": "Passthru0", 00:03:35.376 "base_bdev_name": "Malloc0" 00:03:35.376 } 00:03:35.376 } 00:03:35.376 } 00:03:35.376 ]' 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:35.376 15:56:36 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:35.376 15:56:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:35.376 00:03:35.376 real 0m0.274s 00:03:35.376 user 0m0.162s 00:03:35.376 sys 0m0.044s 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.376 15:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 ************************************ 00:03:35.376 END TEST rpc_integrity 00:03:35.376 ************************************ 00:03:35.376 15:56:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:35.376 15:56:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.376 15:56:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.376 15:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 ************************************ 00:03:35.376 START TEST rpc_plugins 
00:03:35.376 ************************************ 00:03:35.376 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:35.376 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:35.376 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.376 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.376 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:35.376 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:35.636 { 00:03:35.636 "name": "Malloc1", 00:03:35.636 "aliases": [ 00:03:35.636 "d8576817-04bc-4b1c-b294-db385a9e8d98" 00:03:35.636 ], 00:03:35.636 "product_name": "Malloc disk", 00:03:35.636 "block_size": 4096, 00:03:35.636 "num_blocks": 256, 00:03:35.636 "uuid": "d8576817-04bc-4b1c-b294-db385a9e8d98", 00:03:35.636 "assigned_rate_limits": { 00:03:35.636 "rw_ios_per_sec": 0, 00:03:35.636 "rw_mbytes_per_sec": 0, 00:03:35.636 "r_mbytes_per_sec": 0, 00:03:35.636 "w_mbytes_per_sec": 0 00:03:35.636 }, 00:03:35.636 "claimed": false, 00:03:35.636 "zoned": false, 00:03:35.636 "supported_io_types": { 00:03:35.636 "read": true, 00:03:35.636 "write": true, 00:03:35.636 "unmap": true, 00:03:35.636 "flush": true, 00:03:35.636 "reset": true, 00:03:35.636 "nvme_admin": false, 00:03:35.636 "nvme_io": false, 00:03:35.636 "nvme_io_md": false, 00:03:35.636 "write_zeroes": true, 00:03:35.636 "zcopy": true, 00:03:35.636 "get_zone_info": false, 00:03:35.636 "zone_management": false, 00:03:35.636 
"zone_append": false, 00:03:35.636 "compare": false, 00:03:35.636 "compare_and_write": false, 00:03:35.636 "abort": true, 00:03:35.636 "seek_hole": false, 00:03:35.636 "seek_data": false, 00:03:35.636 "copy": true, 00:03:35.636 "nvme_iov_md": false 00:03:35.636 }, 00:03:35.636 "memory_domains": [ 00:03:35.636 { 00:03:35.636 "dma_device_id": "system", 00:03:35.636 "dma_device_type": 1 00:03:35.636 }, 00:03:35.636 { 00:03:35.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.636 "dma_device_type": 2 00:03:35.636 } 00:03:35.636 ], 00:03:35.636 "driver_specific": {} 00:03:35.636 } 00:03:35.636 ]' 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:35.636 15:56:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:35.636 00:03:35.636 real 0m0.148s 00:03:35.636 user 0m0.092s 00:03:35.636 sys 0m0.017s 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.636 15:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.636 ************************************ 
00:03:35.636 END TEST rpc_plugins 00:03:35.636 ************************************ 00:03:35.636 15:56:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:35.636 15:56:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.636 15:56:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.636 15:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.636 ************************************ 00:03:35.636 START TEST rpc_trace_cmd_test 00:03:35.636 ************************************ 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.636 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:35.636 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2537553", 00:03:35.636 "tpoint_group_mask": "0x8", 00:03:35.636 "iscsi_conn": { 00:03:35.636 "mask": "0x2", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "scsi": { 00:03:35.636 "mask": "0x4", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "bdev": { 00:03:35.636 "mask": "0x8", 00:03:35.636 "tpoint_mask": "0xffffffffffffffff" 00:03:35.636 }, 00:03:35.636 "nvmf_rdma": { 00:03:35.636 "mask": "0x10", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "nvmf_tcp": { 00:03:35.636 "mask": "0x20", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "ftl": { 00:03:35.636 "mask": "0x40", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "blobfs": { 00:03:35.636 "mask": "0x80", 00:03:35.636 
"tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "dsa": { 00:03:35.636 "mask": "0x200", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "thread": { 00:03:35.636 "mask": "0x400", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "nvme_pcie": { 00:03:35.636 "mask": "0x800", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.636 "iaa": { 00:03:35.636 "mask": "0x1000", 00:03:35.636 "tpoint_mask": "0x0" 00:03:35.636 }, 00:03:35.637 "nvme_tcp": { 00:03:35.637 "mask": "0x2000", 00:03:35.637 "tpoint_mask": "0x0" 00:03:35.637 }, 00:03:35.637 "bdev_nvme": { 00:03:35.637 "mask": "0x4000", 00:03:35.637 "tpoint_mask": "0x0" 00:03:35.637 }, 00:03:35.637 "sock": { 00:03:35.637 "mask": "0x8000", 00:03:35.637 "tpoint_mask": "0x0" 00:03:35.637 }, 00:03:35.637 "blob": { 00:03:35.637 "mask": "0x10000", 00:03:35.637 "tpoint_mask": "0x0" 00:03:35.637 }, 00:03:35.637 "bdev_raid": { 00:03:35.637 "mask": "0x20000", 00:03:35.637 "tpoint_mask": "0x0" 00:03:35.637 }, 00:03:35.637 "scheduler": { 00:03:35.637 "mask": "0x40000", 00:03:35.637 "tpoint_mask": "0x0" 00:03:35.637 } 00:03:35.637 }' 00:03:35.637 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:35.637 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:35.637 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:35.896 00:03:35.896 real 0m0.198s 00:03:35.896 user 0m0.168s 00:03:35.896 sys 0m0.022s 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.896 15:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:35.896 ************************************ 00:03:35.896 END TEST rpc_trace_cmd_test 00:03:35.896 ************************************ 00:03:35.896 15:56:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:35.896 15:56:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:35.896 15:56:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:35.896 15:56:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.896 15:56:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.896 15:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.896 ************************************ 00:03:35.896 START TEST rpc_daemon_integrity 00:03:35.896 ************************************ 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.896 15:56:36 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:36.156 { 00:03:36.156 "name": "Malloc2", 00:03:36.156 "aliases": [ 00:03:36.156 "df160109-2ef5-47fe-9c87-3bae51cc7f54" 00:03:36.156 ], 00:03:36.156 "product_name": "Malloc disk", 00:03:36.156 "block_size": 512, 00:03:36.156 "num_blocks": 16384, 00:03:36.156 "uuid": "df160109-2ef5-47fe-9c87-3bae51cc7f54", 00:03:36.156 "assigned_rate_limits": { 00:03:36.156 "rw_ios_per_sec": 0, 00:03:36.156 "rw_mbytes_per_sec": 0, 00:03:36.156 "r_mbytes_per_sec": 0, 00:03:36.156 "w_mbytes_per_sec": 0 00:03:36.156 }, 00:03:36.156 "claimed": false, 00:03:36.156 "zoned": false, 00:03:36.156 "supported_io_types": { 00:03:36.156 "read": true, 00:03:36.156 "write": true, 00:03:36.156 "unmap": true, 00:03:36.156 "flush": true, 00:03:36.156 "reset": true, 00:03:36.156 "nvme_admin": false, 00:03:36.156 "nvme_io": false, 00:03:36.156 "nvme_io_md": false, 00:03:36.156 "write_zeroes": true, 00:03:36.156 "zcopy": true, 00:03:36.156 "get_zone_info": false, 00:03:36.156 "zone_management": false, 00:03:36.156 "zone_append": false, 00:03:36.156 "compare": false, 00:03:36.156 "compare_and_write": false, 00:03:36.156 "abort": true, 00:03:36.156 "seek_hole": false, 00:03:36.156 "seek_data": false, 00:03:36.156 "copy": true, 00:03:36.156 "nvme_iov_md": false 00:03:36.156 }, 00:03:36.156 "memory_domains": [ 00:03:36.156 { 
00:03:36.156 "dma_device_id": "system", 00:03:36.156 "dma_device_type": 1 00:03:36.156 }, 00:03:36.156 { 00:03:36.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:36.156 "dma_device_type": 2 00:03:36.156 } 00:03:36.156 ], 00:03:36.156 "driver_specific": {} 00:03:36.156 } 00:03:36.156 ]' 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.156 [2024-11-20 15:56:36.795802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:36.156 [2024-11-20 15:56:36.795832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:36.156 [2024-11-20 15:56:36.795844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa56150 00:03:36.156 [2024-11-20 15:56:36.795851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:36.156 [2024-11-20 15:56:36.796860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:36.156 [2024-11-20 15:56:36.796880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:36.156 Passthru0 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:36.156 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:36.156 { 00:03:36.156 "name": "Malloc2", 00:03:36.156 "aliases": [ 00:03:36.156 "df160109-2ef5-47fe-9c87-3bae51cc7f54" 00:03:36.156 ], 00:03:36.156 "product_name": "Malloc disk", 00:03:36.156 "block_size": 512, 00:03:36.157 "num_blocks": 16384, 00:03:36.157 "uuid": "df160109-2ef5-47fe-9c87-3bae51cc7f54", 00:03:36.157 "assigned_rate_limits": { 00:03:36.157 "rw_ios_per_sec": 0, 00:03:36.157 "rw_mbytes_per_sec": 0, 00:03:36.157 "r_mbytes_per_sec": 0, 00:03:36.157 "w_mbytes_per_sec": 0 00:03:36.157 }, 00:03:36.157 "claimed": true, 00:03:36.157 "claim_type": "exclusive_write", 00:03:36.157 "zoned": false, 00:03:36.157 "supported_io_types": { 00:03:36.157 "read": true, 00:03:36.157 "write": true, 00:03:36.157 "unmap": true, 00:03:36.157 "flush": true, 00:03:36.157 "reset": true, 00:03:36.157 "nvme_admin": false, 00:03:36.157 "nvme_io": false, 00:03:36.157 "nvme_io_md": false, 00:03:36.157 "write_zeroes": true, 00:03:36.157 "zcopy": true, 00:03:36.157 "get_zone_info": false, 00:03:36.157 "zone_management": false, 00:03:36.157 "zone_append": false, 00:03:36.157 "compare": false, 00:03:36.157 "compare_and_write": false, 00:03:36.157 "abort": true, 00:03:36.157 "seek_hole": false, 00:03:36.157 "seek_data": false, 00:03:36.157 "copy": true, 00:03:36.157 "nvme_iov_md": false 00:03:36.157 }, 00:03:36.157 "memory_domains": [ 00:03:36.157 { 00:03:36.157 "dma_device_id": "system", 00:03:36.157 "dma_device_type": 1 00:03:36.157 }, 00:03:36.157 { 00:03:36.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:36.157 "dma_device_type": 2 00:03:36.157 } 00:03:36.157 ], 00:03:36.157 "driver_specific": {} 00:03:36.157 }, 00:03:36.157 { 00:03:36.157 "name": "Passthru0", 00:03:36.157 "aliases": [ 00:03:36.157 "2cd92f17-21a7-59ca-89aa-9574fb94452c" 00:03:36.157 ], 00:03:36.157 "product_name": "passthru", 00:03:36.157 "block_size": 512, 00:03:36.157 "num_blocks": 16384, 00:03:36.157 "uuid": 
"2cd92f17-21a7-59ca-89aa-9574fb94452c", 00:03:36.157 "assigned_rate_limits": { 00:03:36.157 "rw_ios_per_sec": 0, 00:03:36.157 "rw_mbytes_per_sec": 0, 00:03:36.157 "r_mbytes_per_sec": 0, 00:03:36.157 "w_mbytes_per_sec": 0 00:03:36.157 }, 00:03:36.157 "claimed": false, 00:03:36.157 "zoned": false, 00:03:36.157 "supported_io_types": { 00:03:36.157 "read": true, 00:03:36.157 "write": true, 00:03:36.157 "unmap": true, 00:03:36.157 "flush": true, 00:03:36.157 "reset": true, 00:03:36.157 "nvme_admin": false, 00:03:36.157 "nvme_io": false, 00:03:36.157 "nvme_io_md": false, 00:03:36.157 "write_zeroes": true, 00:03:36.157 "zcopy": true, 00:03:36.157 "get_zone_info": false, 00:03:36.157 "zone_management": false, 00:03:36.157 "zone_append": false, 00:03:36.157 "compare": false, 00:03:36.157 "compare_and_write": false, 00:03:36.157 "abort": true, 00:03:36.157 "seek_hole": false, 00:03:36.157 "seek_data": false, 00:03:36.157 "copy": true, 00:03:36.157 "nvme_iov_md": false 00:03:36.157 }, 00:03:36.157 "memory_domains": [ 00:03:36.157 { 00:03:36.157 "dma_device_id": "system", 00:03:36.157 "dma_device_type": 1 00:03:36.157 }, 00:03:36.157 { 00:03:36.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:36.157 "dma_device_type": 2 00:03:36.157 } 00:03:36.157 ], 00:03:36.157 "driver_specific": { 00:03:36.157 "passthru": { 00:03:36.157 "name": "Passthru0", 00:03:36.157 "base_bdev_name": "Malloc2" 00:03:36.157 } 00:03:36.157 } 00:03:36.157 } 00:03:36.157 ]' 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:36.157 00:03:36.157 real 0m0.260s 00:03:36.157 user 0m0.157s 00:03:36.157 sys 0m0.040s 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.157 15:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.157 ************************************ 00:03:36.157 END TEST rpc_daemon_integrity 00:03:36.157 ************************************ 00:03:36.157 15:56:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:36.157 15:56:36 rpc -- rpc/rpc.sh@84 -- # killprocess 2537553 00:03:36.157 15:56:36 rpc -- common/autotest_common.sh@954 -- # '[' -z 2537553 ']' 00:03:36.157 15:56:36 rpc -- common/autotest_common.sh@958 -- # kill -0 2537553 00:03:36.157 15:56:36 rpc -- common/autotest_common.sh@959 -- # uname 00:03:36.157 15:56:36 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.157 15:56:36 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2537553 00:03:36.417 15:56:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.417 15:56:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.417 15:56:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2537553' 00:03:36.417 killing process with pid 2537553 00:03:36.417 15:56:37 rpc -- common/autotest_common.sh@973 -- # kill 2537553 00:03:36.417 15:56:37 rpc -- common/autotest_common.sh@978 -- # wait 2537553 00:03:36.675 00:03:36.675 real 0m2.068s 00:03:36.675 user 0m2.584s 00:03:36.675 sys 0m0.706s 00:03:36.675 15:56:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.675 15:56:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.675 ************************************ 00:03:36.675 END TEST rpc 00:03:36.675 ************************************ 00:03:36.675 15:56:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:36.675 15:56:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.675 15:56:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.675 15:56:37 -- common/autotest_common.sh@10 -- # set +x 00:03:36.675 ************************************ 00:03:36.675 START TEST skip_rpc 00:03:36.675 ************************************ 00:03:36.675 15:56:37 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:36.675 * Looking for test storage... 
00:03:36.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:36.675 15:56:37 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:36.675 15:56:37 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:36.675 15:56:37 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.935 15:56:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.935 --rc genhtml_branch_coverage=1 00:03:36.935 --rc genhtml_function_coverage=1 00:03:36.935 --rc genhtml_legend=1 00:03:36.935 --rc geninfo_all_blocks=1 00:03:36.935 --rc geninfo_unexecuted_blocks=1 00:03:36.935 00:03:36.935 ' 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.935 --rc genhtml_branch_coverage=1 00:03:36.935 --rc genhtml_function_coverage=1 00:03:36.935 --rc genhtml_legend=1 00:03:36.935 --rc geninfo_all_blocks=1 00:03:36.935 --rc geninfo_unexecuted_blocks=1 00:03:36.935 00:03:36.935 ' 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.935 --rc genhtml_branch_coverage=1 00:03:36.935 --rc genhtml_function_coverage=1 00:03:36.935 --rc genhtml_legend=1 00:03:36.935 --rc geninfo_all_blocks=1 00:03:36.935 --rc geninfo_unexecuted_blocks=1 00:03:36.935 00:03:36.935 ' 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:36.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.935 --rc genhtml_branch_coverage=1 00:03:36.935 --rc genhtml_function_coverage=1 00:03:36.935 --rc genhtml_legend=1 00:03:36.935 --rc geninfo_all_blocks=1 00:03:36.935 --rc geninfo_unexecuted_blocks=1 00:03:36.935 00:03:36.935 ' 00:03:36.935 15:56:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.935 15:56:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:36.935 15:56:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.935 15:56:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.935 ************************************ 00:03:36.935 START TEST skip_rpc 00:03:36.935 ************************************ 00:03:36.935 15:56:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:36.935 15:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:36.935 15:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2538188 00:03:36.935 15:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.935 15:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:36.935 [2024-11-20 15:56:37.638883] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:03:36.935 [2024-11-20 15:56:37.638919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538188 ] 00:03:36.935 [2024-11-20 15:56:37.713543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.935 [2024-11-20 15:56:37.754297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:42.312 15:56:42 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2538188 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2538188 ']' 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2538188 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538188 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2538188' 00:03:42.312 killing process with pid 2538188 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2538188 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2538188 00:03:42.312 00:03:42.312 real 0m5.366s 00:03:42.312 user 0m5.134s 00:03:42.312 sys 0m0.269s 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.312 15:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.312 ************************************ 00:03:42.312 END TEST skip_rpc 00:03:42.312 ************************************ 00:03:42.312 15:56:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:42.312 15:56:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.312 15:56:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.312 15:56:42 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.312 ************************************ 00:03:42.312 START TEST skip_rpc_with_json 00:03:42.312 ************************************ 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2539124 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2539124 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2539124 ']' 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.312 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.312 [2024-11-20 15:56:43.079583] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:03:42.312 [2024-11-20 15:56:43.079622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539124 ] 00:03:42.571 [2024-11-20 15:56:43.155453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.571 [2024-11-20 15:56:43.198126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.831 [2024-11-20 15:56:43.415833] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:42.831 request: 00:03:42.831 { 00:03:42.831 "trtype": "tcp", 00:03:42.831 "method": "nvmf_get_transports", 00:03:42.831 "req_id": 1 00:03:42.831 } 00:03:42.831 Got JSON-RPC error response 00:03:42.831 response: 00:03:42.831 { 00:03:42.831 "code": -19, 00:03:42.831 "message": "No such device" 00:03:42.831 } 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.831 [2024-11-20 15:56:43.427939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:42.831 15:56:43 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.831 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.831 { 00:03:42.831 "subsystems": [ 00:03:42.831 { 00:03:42.831 "subsystem": "fsdev", 00:03:42.831 "config": [ 00:03:42.831 { 00:03:42.831 "method": "fsdev_set_opts", 00:03:42.831 "params": { 00:03:42.831 "fsdev_io_pool_size": 65535, 00:03:42.831 "fsdev_io_cache_size": 256 00:03:42.831 } 00:03:42.831 } 00:03:42.831 ] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "vfio_user_target", 00:03:42.831 "config": null 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "keyring", 00:03:42.831 "config": [] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "iobuf", 00:03:42.831 "config": [ 00:03:42.831 { 00:03:42.831 "method": "iobuf_set_options", 00:03:42.831 "params": { 00:03:42.831 "small_pool_count": 8192, 00:03:42.831 "large_pool_count": 1024, 00:03:42.831 "small_bufsize": 8192, 00:03:42.831 "large_bufsize": 135168, 00:03:42.831 "enable_numa": false 00:03:42.831 } 00:03:42.831 } 00:03:42.831 ] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "sock", 00:03:42.831 "config": [ 00:03:42.831 { 00:03:42.831 "method": "sock_set_default_impl", 00:03:42.831 "params": { 00:03:42.831 "impl_name": "posix" 00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "sock_impl_set_options", 00:03:42.831 "params": { 00:03:42.831 "impl_name": "ssl", 00:03:42.831 "recv_buf_size": 4096, 00:03:42.831 "send_buf_size": 4096, 
00:03:42.831 "enable_recv_pipe": true, 00:03:42.831 "enable_quickack": false, 00:03:42.831 "enable_placement_id": 0, 00:03:42.831 "enable_zerocopy_send_server": true, 00:03:42.831 "enable_zerocopy_send_client": false, 00:03:42.831 "zerocopy_threshold": 0, 00:03:42.831 "tls_version": 0, 00:03:42.831 "enable_ktls": false 00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "sock_impl_set_options", 00:03:42.831 "params": { 00:03:42.831 "impl_name": "posix", 00:03:42.831 "recv_buf_size": 2097152, 00:03:42.831 "send_buf_size": 2097152, 00:03:42.831 "enable_recv_pipe": true, 00:03:42.831 "enable_quickack": false, 00:03:42.831 "enable_placement_id": 0, 00:03:42.831 "enable_zerocopy_send_server": true, 00:03:42.831 "enable_zerocopy_send_client": false, 00:03:42.831 "zerocopy_threshold": 0, 00:03:42.831 "tls_version": 0, 00:03:42.831 "enable_ktls": false 00:03:42.831 } 00:03:42.831 } 00:03:42.831 ] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "vmd", 00:03:42.831 "config": [] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "accel", 00:03:42.831 "config": [ 00:03:42.831 { 00:03:42.831 "method": "accel_set_options", 00:03:42.831 "params": { 00:03:42.831 "small_cache_size": 128, 00:03:42.831 "large_cache_size": 16, 00:03:42.831 "task_count": 2048, 00:03:42.831 "sequence_count": 2048, 00:03:42.831 "buf_count": 2048 00:03:42.831 } 00:03:42.831 } 00:03:42.831 ] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "bdev", 00:03:42.831 "config": [ 00:03:42.831 { 00:03:42.831 "method": "bdev_set_options", 00:03:42.831 "params": { 00:03:42.831 "bdev_io_pool_size": 65535, 00:03:42.831 "bdev_io_cache_size": 256, 00:03:42.831 "bdev_auto_examine": true, 00:03:42.831 "iobuf_small_cache_size": 128, 00:03:42.831 "iobuf_large_cache_size": 16 00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "bdev_raid_set_options", 00:03:42.831 "params": { 00:03:42.831 "process_window_size_kb": 1024, 00:03:42.831 "process_max_bandwidth_mb_sec": 0 
00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "bdev_iscsi_set_options", 00:03:42.831 "params": { 00:03:42.831 "timeout_sec": 30 00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "bdev_nvme_set_options", 00:03:42.831 "params": { 00:03:42.831 "action_on_timeout": "none", 00:03:42.831 "timeout_us": 0, 00:03:42.831 "timeout_admin_us": 0, 00:03:42.831 "keep_alive_timeout_ms": 10000, 00:03:42.831 "arbitration_burst": 0, 00:03:42.831 "low_priority_weight": 0, 00:03:42.831 "medium_priority_weight": 0, 00:03:42.831 "high_priority_weight": 0, 00:03:42.831 "nvme_adminq_poll_period_us": 10000, 00:03:42.831 "nvme_ioq_poll_period_us": 0, 00:03:42.831 "io_queue_requests": 0, 00:03:42.831 "delay_cmd_submit": true, 00:03:42.831 "transport_retry_count": 4, 00:03:42.831 "bdev_retry_count": 3, 00:03:42.831 "transport_ack_timeout": 0, 00:03:42.831 "ctrlr_loss_timeout_sec": 0, 00:03:42.831 "reconnect_delay_sec": 0, 00:03:42.831 "fast_io_fail_timeout_sec": 0, 00:03:42.831 "disable_auto_failback": false, 00:03:42.831 "generate_uuids": false, 00:03:42.831 "transport_tos": 0, 00:03:42.831 "nvme_error_stat": false, 00:03:42.831 "rdma_srq_size": 0, 00:03:42.831 "io_path_stat": false, 00:03:42.831 "allow_accel_sequence": false, 00:03:42.831 "rdma_max_cq_size": 0, 00:03:42.831 "rdma_cm_event_timeout_ms": 0, 00:03:42.831 "dhchap_digests": [ 00:03:42.831 "sha256", 00:03:42.831 "sha384", 00:03:42.831 "sha512" 00:03:42.831 ], 00:03:42.831 "dhchap_dhgroups": [ 00:03:42.831 "null", 00:03:42.831 "ffdhe2048", 00:03:42.831 "ffdhe3072", 00:03:42.831 "ffdhe4096", 00:03:42.831 "ffdhe6144", 00:03:42.831 "ffdhe8192" 00:03:42.831 ] 00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "bdev_nvme_set_hotplug", 00:03:42.831 "params": { 00:03:42.831 "period_us": 100000, 00:03:42.831 "enable": false 00:03:42.831 } 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "method": "bdev_wait_for_examine" 00:03:42.831 } 00:03:42.831 ] 00:03:42.831 }, 00:03:42.831 { 
00:03:42.831 "subsystem": "scsi", 00:03:42.831 "config": null 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "scheduler", 00:03:42.831 "config": [ 00:03:42.831 { 00:03:42.831 "method": "framework_set_scheduler", 00:03:42.831 "params": { 00:03:42.831 "name": "static" 00:03:42.831 } 00:03:42.831 } 00:03:42.831 ] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "vhost_scsi", 00:03:42.831 "config": [] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "vhost_blk", 00:03:42.831 "config": [] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "ublk", 00:03:42.831 "config": [] 00:03:42.831 }, 00:03:42.831 { 00:03:42.831 "subsystem": "nbd", 00:03:42.831 "config": [] 00:03:42.831 }, 00:03:42.832 { 00:03:42.832 "subsystem": "nvmf", 00:03:42.832 "config": [ 00:03:42.832 { 00:03:42.832 "method": "nvmf_set_config", 00:03:42.832 "params": { 00:03:42.832 "discovery_filter": "match_any", 00:03:42.832 "admin_cmd_passthru": { 00:03:42.832 "identify_ctrlr": false 00:03:42.832 }, 00:03:42.832 "dhchap_digests": [ 00:03:42.832 "sha256", 00:03:42.832 "sha384", 00:03:42.832 "sha512" 00:03:42.832 ], 00:03:42.832 "dhchap_dhgroups": [ 00:03:42.832 "null", 00:03:42.832 "ffdhe2048", 00:03:42.832 "ffdhe3072", 00:03:42.832 "ffdhe4096", 00:03:42.832 "ffdhe6144", 00:03:42.832 "ffdhe8192" 00:03:42.832 ] 00:03:42.832 } 00:03:42.832 }, 00:03:42.832 { 00:03:42.832 "method": "nvmf_set_max_subsystems", 00:03:42.832 "params": { 00:03:42.832 "max_subsystems": 1024 00:03:42.832 } 00:03:42.832 }, 00:03:42.832 { 00:03:42.832 "method": "nvmf_set_crdt", 00:03:42.832 "params": { 00:03:42.832 "crdt1": 0, 00:03:42.832 "crdt2": 0, 00:03:42.832 "crdt3": 0 00:03:42.832 } 00:03:42.832 }, 00:03:42.832 { 00:03:42.832 "method": "nvmf_create_transport", 00:03:42.832 "params": { 00:03:42.832 "trtype": "TCP", 00:03:42.832 "max_queue_depth": 128, 00:03:42.832 "max_io_qpairs_per_ctrlr": 127, 00:03:42.832 "in_capsule_data_size": 4096, 00:03:42.832 "max_io_size": 131072, 00:03:42.832 
"io_unit_size": 131072, 00:03:42.832 "max_aq_depth": 128, 00:03:42.832 "num_shared_buffers": 511, 00:03:42.832 "buf_cache_size": 4294967295, 00:03:42.832 "dif_insert_or_strip": false, 00:03:42.832 "zcopy": false, 00:03:42.832 "c2h_success": true, 00:03:42.832 "sock_priority": 0, 00:03:42.832 "abort_timeout_sec": 1, 00:03:42.832 "ack_timeout": 0, 00:03:42.832 "data_wr_pool_size": 0 00:03:42.832 } 00:03:42.832 } 00:03:42.832 ] 00:03:42.832 }, 00:03:42.832 { 00:03:42.832 "subsystem": "iscsi", 00:03:42.832 "config": [ 00:03:42.832 { 00:03:42.832 "method": "iscsi_set_options", 00:03:42.832 "params": { 00:03:42.832 "node_base": "iqn.2016-06.io.spdk", 00:03:42.832 "max_sessions": 128, 00:03:42.832 "max_connections_per_session": 2, 00:03:42.832 "max_queue_depth": 64, 00:03:42.832 "default_time2wait": 2, 00:03:42.832 "default_time2retain": 20, 00:03:42.832 "first_burst_length": 8192, 00:03:42.832 "immediate_data": true, 00:03:42.832 "allow_duplicated_isid": false, 00:03:42.832 "error_recovery_level": 0, 00:03:42.832 "nop_timeout": 60, 00:03:42.832 "nop_in_interval": 30, 00:03:42.832 "disable_chap": false, 00:03:42.832 "require_chap": false, 00:03:42.832 "mutual_chap": false, 00:03:42.832 "chap_group": 0, 00:03:42.832 "max_large_datain_per_connection": 64, 00:03:42.832 "max_r2t_per_connection": 4, 00:03:42.832 "pdu_pool_size": 36864, 00:03:42.832 "immediate_data_pool_size": 16384, 00:03:42.832 "data_out_pool_size": 2048 00:03:42.832 } 00:03:42.832 } 00:03:42.832 ] 00:03:42.832 } 00:03:42.832 ] 00:03:42.832 } 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2539124 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2539124 ']' 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2539124 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539124 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539124' 00:03:42.832 killing process with pid 2539124 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2539124 00:03:42.832 15:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2539124 00:03:43.401 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2539157 00:03:43.401 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.401 15:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:48.670 15:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2539157 00:03:48.670 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2539157 ']' 00:03:48.670 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2539157 00:03:48.670 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:48.670 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.670 15:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539157 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539157' 00:03:48.670 killing process with pid 2539157 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2539157 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2539157 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.670 00:03:48.670 real 0m6.292s 00:03:48.670 user 0m5.987s 00:03:48.670 sys 0m0.601s 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.670 ************************************ 00:03:48.670 END TEST skip_rpc_with_json 00:03:48.670 ************************************ 00:03:48.670 15:56:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:48.670 15:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.670 15:56:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.670 15:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.670 ************************************ 00:03:48.670 START TEST skip_rpc_with_delay 00:03:48.670 ************************************ 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.670 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.670 [2024-11-20 15:56:49.440917] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:48.671 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:48.671 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:48.671 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:48.671 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:48.671 00:03:48.671 real 0m0.069s 00:03:48.671 user 0m0.044s 00:03:48.671 sys 0m0.025s 00:03:48.671 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.671 15:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:48.671 ************************************ 00:03:48.671 END TEST skip_rpc_with_delay 00:03:48.671 ************************************ 00:03:48.671 15:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:48.671 15:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:48.671 15:56:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:48.671 15:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.671 15:56:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.671 15:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.930 ************************************ 00:03:48.930 START TEST exit_on_failed_rpc_init 00:03:48.930 ************************************ 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2540128 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2540128 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2540128 ']' 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.930 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.930 [2024-11-20 15:56:49.577778] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:03:48.930 [2024-11-20 15:56:49.577821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540128 ] 00:03:48.930 [2024-11-20 15:56:49.654857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.930 [2024-11-20 15:56:49.698598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.190 
15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:49.190 15:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.190 [2024-11-20 15:56:49.971470] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:03:49.190 [2024-11-20 15:56:49.971516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540356 ] 00:03:49.449 [2024-11-20 15:56:50.046787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.449 [2024-11-20 15:56:50.091773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:49.449 [2024-11-20 15:56:50.091830] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:49.449 [2024-11-20 15:56:50.091839] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:49.449 [2024-11-20 15:56:50.091848] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2540128 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2540128 ']' 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2540128 00:03:49.449 15:56:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540128 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.449 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.450 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540128' 00:03:49.450 killing process with pid 2540128 00:03:49.450 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2540128 00:03:49.450 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2540128 00:03:49.709 00:03:49.709 real 0m0.970s 00:03:49.709 user 0m1.028s 00:03:49.709 sys 0m0.398s 00:03:49.709 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.709 15:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.709 ************************************ 00:03:49.709 END TEST exit_on_failed_rpc_init 00:03:49.709 ************************************ 00:03:49.709 15:56:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.709 00:03:49.709 real 0m13.146s 00:03:49.709 user 0m12.415s 00:03:49.709 sys 0m1.553s 00:03:49.709 15:56:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.709 15:56:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.709 ************************************ 00:03:49.709 END TEST skip_rpc 00:03:49.709 ************************************ 00:03:49.969 15:56:50 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.969 15:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.969 15:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.969 15:56:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.969 ************************************ 00:03:49.969 START TEST rpc_client 00:03:49.969 ************************************ 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.969 * Looking for test storage... 00:03:49.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.969 15:56:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.969 --rc genhtml_branch_coverage=1 00:03:49.969 --rc genhtml_function_coverage=1 00:03:49.969 --rc genhtml_legend=1 00:03:49.969 --rc geninfo_all_blocks=1 00:03:49.969 --rc geninfo_unexecuted_blocks=1 00:03:49.969 00:03:49.969 ' 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.969 --rc genhtml_branch_coverage=1 
00:03:49.969 --rc genhtml_function_coverage=1 00:03:49.969 --rc genhtml_legend=1 00:03:49.969 --rc geninfo_all_blocks=1 00:03:49.969 --rc geninfo_unexecuted_blocks=1 00:03:49.969 00:03:49.969 ' 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.969 --rc genhtml_branch_coverage=1 00:03:49.969 --rc genhtml_function_coverage=1 00:03:49.969 --rc genhtml_legend=1 00:03:49.969 --rc geninfo_all_blocks=1 00:03:49.969 --rc geninfo_unexecuted_blocks=1 00:03:49.969 00:03:49.969 ' 00:03:49.969 15:56:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.969 --rc genhtml_branch_coverage=1 00:03:49.969 --rc genhtml_function_coverage=1 00:03:49.969 --rc genhtml_legend=1 00:03:49.969 --rc geninfo_all_blocks=1 00:03:49.969 --rc geninfo_unexecuted_blocks=1 00:03:49.969 00:03:49.969 ' 00:03:49.969 15:56:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:49.969 OK 00:03:50.229 15:56:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:50.229 00:03:50.229 real 0m0.203s 00:03:50.229 user 0m0.119s 00:03:50.229 sys 0m0.096s 00:03:50.229 15:56:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.229 15:56:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:50.229 ************************************ 00:03:50.229 END TEST rpc_client 00:03:50.229 ************************************ 00:03:50.229 15:56:50 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:50.229 15:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.229 15:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.229 15:56:50 -- common/autotest_common.sh@10 
-- # set +x 00:03:50.229 ************************************ 00:03:50.229 START TEST json_config 00:03:50.229 ************************************ 00:03:50.229 15:56:50 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:50.229 15:56:50 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.229 15:56:50 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.229 15:56:50 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.229 15:56:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.229 15:56:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.229 15:56:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.229 15:56:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.229 15:56:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.229 15:56:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.229 15:56:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:50.229 15:56:51 json_config -- scripts/common.sh@345 -- # : 1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.229 15:56:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.229 15:56:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@353 -- # local d=1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.229 15:56:51 json_config -- scripts/common.sh@355 -- # echo 1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.229 15:56:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@353 -- # local d=2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.229 15:56:51 json_config -- scripts/common.sh@355 -- # echo 2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.229 15:56:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.229 15:56:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.229 15:56:51 json_config -- scripts/common.sh@368 -- # return 0 00:03:50.229 15:56:51 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.229 15:56:51 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.229 --rc genhtml_branch_coverage=1 00:03:50.229 --rc genhtml_function_coverage=1 00:03:50.229 --rc genhtml_legend=1 00:03:50.229 --rc geninfo_all_blocks=1 00:03:50.229 --rc geninfo_unexecuted_blocks=1 00:03:50.229 00:03:50.229 ' 00:03:50.229 15:56:51 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.229 --rc genhtml_branch_coverage=1 00:03:50.229 --rc genhtml_function_coverage=1 00:03:50.229 --rc genhtml_legend=1 00:03:50.229 --rc geninfo_all_blocks=1 00:03:50.229 --rc geninfo_unexecuted_blocks=1 00:03:50.229 00:03:50.229 ' 00:03:50.229 15:56:51 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.229 --rc genhtml_branch_coverage=1 00:03:50.229 --rc genhtml_function_coverage=1 00:03:50.229 --rc genhtml_legend=1 00:03:50.229 --rc geninfo_all_blocks=1 00:03:50.229 --rc geninfo_unexecuted_blocks=1 00:03:50.229 00:03:50.229 ' 00:03:50.229 15:56:51 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.229 --rc genhtml_branch_coverage=1 00:03:50.229 --rc genhtml_function_coverage=1 00:03:50.229 --rc genhtml_legend=1 00:03:50.229 --rc geninfo_all_blocks=1 00:03:50.229 --rc geninfo_unexecuted_blocks=1 00:03:50.229 00:03:50.229 ' 00:03:50.229 15:56:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:50.229 15:56:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.229 15:56:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.230 15:56:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.230 15:56:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.230 15:56:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.230 15:56:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.230 15:56:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.230 15:56:51 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.230 15:56:51 json_config -- paths/export.sh@5 -- # export PATH 00:03:50.230 15:56:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@51 -- # : 0 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.230 15:56:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:50.230 INFO: JSON configuration test init 00:03:50.230 15:56:51 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:50.230 15:56:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.230 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 15:56:51 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:50.230 15:56:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.230 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.489 15:56:51 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:50.489 15:56:51 json_config -- json_config/common.sh@9 -- # local app=target 00:03:50.489 15:56:51 json_config -- json_config/common.sh@10 -- # shift 00:03:50.489 15:56:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:50.489 15:56:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:50.489 15:56:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:50.489 15:56:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.489 15:56:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.489 15:56:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2540569 00:03:50.489 15:56:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:50.489 Waiting for target to run... 
00:03:50.489 15:56:51 json_config -- json_config/common.sh@25 -- # waitforlisten 2540569 /var/tmp/spdk_tgt.sock 00:03:50.489 15:56:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 2540569 ']' 00:03:50.489 15:56:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:50.489 15:56:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:50.489 15:56:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.489 15:56:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:50.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:50.489 15:56:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.489 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.489 [2024-11-20 15:56:51.117567] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:03:50.489 [2024-11-20 15:56:51.117618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540569 ] 00:03:50.749 [2024-11-20 15:56:51.408320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.749 [2024-11-20 15:56:51.442852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.317 15:56:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.317 15:56:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:51.317 15:56:51 json_config -- json_config/common.sh@26 -- # echo '' 00:03:51.317 00:03:51.317 15:56:51 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:51.318 15:56:51 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:51.318 15:56:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.318 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.318 15:56:51 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:51.318 15:56:51 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:51.318 15:56:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.318 15:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.318 15:56:51 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:51.318 15:56:51 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:51.318 15:56:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:54.607 15:56:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.607 15:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:54.607 15:56:55 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:54.608 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@54 -- # sort 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:54.608 15:56:55 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:54.608 15:56:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.608 15:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:54.608 15:56:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.608 15:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:54.608 15:56:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:54.608 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:54.867 MallocForNvmf0 00:03:54.867 15:56:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:54.867 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:55.125 MallocForNvmf1 00:03:55.125 15:56:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:55.126 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:55.126 [2024-11-20 15:56:55.880538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.126 15:56:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.126 15:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.384 15:56:56 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.384 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.642 15:56:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.642 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.642 15:56:56 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.642 15:56:56 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.901 [2024-11-20 15:56:56.598668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.901 15:56:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:55.901 15:56:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.901 15:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.901 15:56:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:55.901 15:56:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.901 15:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.901 15:56:56 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:55.901 15:56:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.901 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.159 MallocBdevForConfigChangeCheck 00:03:56.159 15:56:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:56.159 15:56:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.159 15:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.159 15:56:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:56.159 15:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.417 15:56:57 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:56.417 INFO: shutting down applications... 00:03:56.417 15:56:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:56.417 15:56:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:56.417 15:56:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:56.417 15:56:57 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:58.320 Calling clear_iscsi_subsystem 00:03:58.320 Calling clear_nvmf_subsystem 00:03:58.320 Calling clear_nbd_subsystem 00:03:58.320 Calling clear_ublk_subsystem 00:03:58.320 Calling clear_vhost_blk_subsystem 00:03:58.320 Calling clear_vhost_scsi_subsystem 00:03:58.320 Calling clear_bdev_subsystem 00:03:58.320 15:56:58 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:58.320 15:56:58 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:58.320 15:56:58 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:58.320 15:56:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.320 15:56:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:58.320 15:56:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:58.587 15:56:59 json_config -- json_config/json_config.sh@352 -- # break 00:03:58.587 15:56:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:58.587 15:56:59 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:58.587 15:56:59 json_config -- json_config/common.sh@31 -- # local app=target 00:03:58.588 15:56:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.588 15:56:59 json_config -- json_config/common.sh@35 -- # [[ -n 2540569 ]] 00:03:58.588 15:56:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2540569 00:03:58.588 15:56:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.588 15:56:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.588 15:56:59 json_config -- json_config/common.sh@41 -- # kill -0 2540569 00:03:58.588 15:56:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:59.158 15:56:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.158 15:56:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.158 15:56:59 json_config -- json_config/common.sh@41 -- # kill -0 2540569 00:03:59.158 15:56:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:59.158 15:56:59 json_config -- json_config/common.sh@43 -- # break 00:03:59.158 15:56:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:59.158 15:56:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:59.158 SPDK target shutdown done 00:03:59.158 15:56:59 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:59.158 INFO: relaunching applications... 
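The shutdown sequence traced above (json_config/common.sh: `kill -SIGINT`, then up to 30 iterations of `kill -0` with a half-second sleep) can be sketched as follows. `shutdown_app` is a hypothetical stand-in for the helper, not the actual SPDK source:

```shell
# Sketch of the polling shutdown the log traces: send SIGINT, then poll with
# `kill -0` (signal 0 = existence check) until the process exits or we time out.
shutdown_app() {
  local pid=$1
  kill -SIGINT "$pid" 2>/dev/null
  local i
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0   # process gone: shutdown done
    sleep 0.5
  done
  return 1   # still alive after ~15s; caller would escalate
}
```

In the log the loop exits on the second `kill -0` check, after which the harness prints "SPDK target shutdown done".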
00:03:59.158 15:56:59 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.158 15:56:59 json_config -- json_config/common.sh@9 -- # local app=target 00:03:59.158 15:56:59 json_config -- json_config/common.sh@10 -- # shift 00:03:59.158 15:56:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:59.158 15:56:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:59.158 15:56:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:59.158 15:56:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.158 15:56:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:59.158 15:56:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2542235 00:03:59.158 15:56:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:59.158 Waiting for target to run... 00:03:59.158 15:56:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.158 15:56:59 json_config -- json_config/common.sh@25 -- # waitforlisten 2542235 /var/tmp/spdk_tgt.sock 00:03:59.158 15:56:59 json_config -- common/autotest_common.sh@835 -- # '[' -z 2542235 ']' 00:03:59.158 15:56:59 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:59.158 15:56:59 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.158 15:56:59 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:59.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:59.158 15:56:59 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.158 15:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.158 [2024-11-20 15:56:59.768464] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:03:59.158 [2024-11-20 15:56:59.768524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542235 ] 00:03:59.415 [2024-11-20 15:57:00.174382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.415 [2024-11-20 15:57:00.211002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.702 [2024-11-20 15:57:03.247080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.702 [2024-11-20 15:57:03.279428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:03.270 15:57:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.270 15:57:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:03.270 15:57:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.270 00:04:03.270 15:57:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:03.270 15:57:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:03.270 INFO: Checking if target configuration is the same... 
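The "Waiting for process to start up and listen on UNIX domain socket" step (`waitforlisten`) amounts to polling until the target's RPC socket is usable. A loose sketch, assuming a simple existence check on the socket path (the real helper also verifies the RPC server responds):

```shell
# Hypothetical sketch of waiting for an app's UNIX-domain RPC socket to appear.
# wait_for_socket is an illustrative name, not the SPDK helper.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # socket file exists: target is listening
    sleep 0.1
  done
  return 1   # gave up after max_retries
}
```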
00:04:03.270 15:57:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.270 15:57:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:03.270 15:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.270 + '[' 2 -ne 2 ']' 00:04:03.270 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.270 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:03.270 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.270 +++ basename /dev/fd/62 00:04:03.270 ++ mktemp /tmp/62.XXX 00:04:03.270 + tmp_file_1=/tmp/62.Vvr 00:04:03.270 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.270 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.270 + tmp_file_2=/tmp/spdk_tgt_config.json.Ktp 00:04:03.270 + ret=0 00:04:03.270 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.529 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.788 + diff -u /tmp/62.Vvr /tmp/spdk_tgt_config.json.Ktp 00:04:03.788 + echo 'INFO: JSON config files are the same' 00:04:03.788 INFO: JSON config files are the same 00:04:03.788 + rm /tmp/62.Vvr /tmp/spdk_tgt_config.json.Ktp 00:04:03.788 + exit 0 00:04:03.788 15:57:04 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:03.788 15:57:04 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:03.788 INFO: changing configuration and checking if this can be detected... 
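The json_diff.sh flow above (dump the live config, sort both files with `config_filter.py -method sort`, then `diff -u`) can be sketched as below. `normalize_json` uses plain `python3` with `sort_keys=True` as an assumed stand-in for config_filter.py's sort method:

```shell
# Sketch of the "is the target configuration unchanged?" check: normalize key
# order in both JSON files, then diff the normalized copies.
normalize_json() {
  python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))' < "$1"
}
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
printf '{"b": 2, "a": 1}' > cfg_live.json    # stand-in for the live save_config dump
printf '{"a": 1, "b": 2}' > cfg_saved.json   # stand-in for spdk_tgt_config.json
normalize_json cfg_live.json  > "$tmp_file_1"
normalize_json cfg_saved.json > "$tmp_file_2"
if diff -u "$tmp_file_1" "$tmp_file_2" >/dev/null; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2" cfg_live.json cfg_saved.json
```

The second run in the log deletes `MallocBdevForConfigChangeCheck` first, so the diff is non-empty and the harness takes the `exit 1` / "configuration change detected" branch instead.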
00:04:03.788 15:57:04 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.788 15:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.788 15:57:04 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.788 15:57:04 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:03.788 15:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.788 + '[' 2 -ne 2 ']' 00:04:04.046 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:04.046 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:04.047 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.047 +++ basename /dev/fd/62 00:04:04.047 ++ mktemp /tmp/62.XXX 00:04:04.047 + tmp_file_1=/tmp/62.pQw 00:04:04.047 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.047 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:04.047 + tmp_file_2=/tmp/spdk_tgt_config.json.kFh 00:04:04.047 + ret=0 00:04:04.047 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.307 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.307 + diff -u /tmp/62.pQw /tmp/spdk_tgt_config.json.kFh 00:04:04.307 + ret=1 00:04:04.307 + echo '=== Start of file: /tmp/62.pQw ===' 00:04:04.307 + cat /tmp/62.pQw 00:04:04.307 + echo '=== End of file: /tmp/62.pQw ===' 00:04:04.307 + echo '' 00:04:04.307 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kFh ===' 00:04:04.307 + cat /tmp/spdk_tgt_config.json.kFh 00:04:04.307 + echo '=== End of file: /tmp/spdk_tgt_config.json.kFh ===' 00:04:04.307 + echo '' 00:04:04.307 + rm /tmp/62.pQw /tmp/spdk_tgt_config.json.kFh 00:04:04.307 + exit 1 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:04.307 INFO: configuration change detected. 
00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 2542235 ]] 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.307 15:57:05 json_config -- json_config/json_config.sh@330 -- # killprocess 2542235 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@954 -- # '[' -z 2542235 ']' 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@958 -- # kill -0 
2542235 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@959 -- # uname 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542235 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542235' 00:04:04.307 killing process with pid 2542235 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@973 -- # kill 2542235 00:04:04.307 15:57:05 json_config -- common/autotest_common.sh@978 -- # wait 2542235 00:04:06.211 15:57:06 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.212 15:57:06 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:06.212 15:57:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.212 15:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.212 15:57:06 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:06.212 15:57:06 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:06.212 INFO: Success 00:04:06.212 00:04:06.212 real 0m15.775s 00:04:06.212 user 0m16.410s 00:04:06.212 sys 0m2.517s 00:04:06.212 15:57:06 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.212 15:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.212 ************************************ 00:04:06.212 END TEST json_config 00:04:06.212 ************************************ 00:04:06.212 15:57:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.212 15:57:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.212 15:57:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.212 15:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:06.212 ************************************ 00:04:06.212 START TEST json_config_extra_key 00:04:06.212 ************************************ 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.212 --rc genhtml_branch_coverage=1 00:04:06.212 --rc genhtml_function_coverage=1 00:04:06.212 --rc genhtml_legend=1 00:04:06.212 --rc geninfo_all_blocks=1 
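The `lt 1.15 2` trace above is scripts/common.sh's `cmp_versions` comparing the installed lcov against 1.15, component by component with missing components treated as 0. A hedged re-implementation (`version_lt` is an illustrative name, not the SPDK function):

```shell
# Sketch of dotted-version "less than": split both versions on '.', then
# compare numerically position by position, padding the shorter with zeros.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0   # first differing component decides
    (( x > y )) && return 1
  done
  return 1   # all components equal: not strictly less-than
}
```

With this, `version_lt 1.15 2` succeeds (1 < 2 at the first component), matching the log's decision to take the "modern lcov" branch and set `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1`.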
00:04:06.212 --rc geninfo_unexecuted_blocks=1 00:04:06.212 00:04:06.212 ' 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.212 --rc genhtml_branch_coverage=1 00:04:06.212 --rc genhtml_function_coverage=1 00:04:06.212 --rc genhtml_legend=1 00:04:06.212 --rc geninfo_all_blocks=1 00:04:06.212 --rc geninfo_unexecuted_blocks=1 00:04:06.212 00:04:06.212 ' 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.212 --rc genhtml_branch_coverage=1 00:04:06.212 --rc genhtml_function_coverage=1 00:04:06.212 --rc genhtml_legend=1 00:04:06.212 --rc geninfo_all_blocks=1 00:04:06.212 --rc geninfo_unexecuted_blocks=1 00:04:06.212 00:04:06.212 ' 00:04:06.212 15:57:06 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.212 --rc genhtml_branch_coverage=1 00:04:06.212 --rc genhtml_function_coverage=1 00:04:06.212 --rc genhtml_legend=1 00:04:06.212 --rc geninfo_all_blocks=1 00:04:06.212 --rc geninfo_unexecuted_blocks=1 00:04:06.212 00:04:06.212 ' 00:04:06.212 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.212 15:57:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.212 15:57:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.213 15:57:06 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.213 15:57:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.213 15:57:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.213 15:57:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:06.213 15:57:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:06.213 15:57:06 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.213 15:57:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:06.213 INFO: launching applications... 00:04:06.213 15:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2543645 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.213 Waiting for target to run... 
00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2543645 /var/tmp/spdk_tgt.sock 00:04:06.213 15:57:06 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2543645 ']' 00:04:06.213 15:57:06 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.213 15:57:06 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.213 15:57:06 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.213 15:57:06 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.213 15:57:06 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.213 15:57:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.213 [2024-11-20 15:57:06.959999] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:06.213 [2024-11-20 15:57:06.960051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543645 ] 00:04:06.781 [2024-11-20 15:57:07.410229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.781 [2024-11-20 15:57:07.468785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.041 15:57:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.041 15:57:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:07.041 00:04:07.041 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:07.041 INFO: shutting down applications... 00:04:07.041 15:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2543645 ]] 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2543645 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2543645 00:04:07.041 15:57:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.609 15:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.609 15:57:08 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.609 15:57:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2543645 00:04:07.609 15:57:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.609 15:57:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:07.609 15:57:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.609 15:57:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.609 SPDK target shutdown done 00:04:07.609 15:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:07.609 Success 00:04:07.609 00:04:07.609 real 0m1.596s 00:04:07.609 user 0m1.223s 00:04:07.609 sys 0m0.584s 00:04:07.609 15:57:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.609 15:57:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:07.609 ************************************ 00:04:07.609 END TEST json_config_extra_key 00:04:07.609 ************************************ 00:04:07.609 15:57:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.609 15:57:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.609 15:57:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.609 15:57:08 -- common/autotest_common.sh@10 -- # set +x 00:04:07.609 ************************************ 00:04:07.609 START TEST alias_rpc 00:04:07.609 ************************************ 00:04:07.609 15:57:08 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.868 * Looking for test storage... 
00:04:07.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.868 15:57:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.868 --rc genhtml_branch_coverage=1 00:04:07.868 --rc genhtml_function_coverage=1 00:04:07.868 --rc genhtml_legend=1 00:04:07.868 --rc geninfo_all_blocks=1 00:04:07.868 --rc geninfo_unexecuted_blocks=1 00:04:07.868 00:04:07.868 ' 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.868 --rc genhtml_branch_coverage=1 00:04:07.868 --rc genhtml_function_coverage=1 00:04:07.868 --rc genhtml_legend=1 00:04:07.868 --rc geninfo_all_blocks=1 00:04:07.868 --rc geninfo_unexecuted_blocks=1 00:04:07.868 00:04:07.868 ' 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:07.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.868 --rc genhtml_branch_coverage=1 00:04:07.868 --rc genhtml_function_coverage=1 00:04:07.868 --rc genhtml_legend=1 00:04:07.868 --rc geninfo_all_blocks=1 00:04:07.868 --rc geninfo_unexecuted_blocks=1 00:04:07.868 00:04:07.868 ' 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.868 --rc genhtml_branch_coverage=1 00:04:07.868 --rc genhtml_function_coverage=1 00:04:07.868 --rc genhtml_legend=1 00:04:07.868 --rc geninfo_all_blocks=1 00:04:07.868 --rc geninfo_unexecuted_blocks=1 00:04:07.868 00:04:07.868 ' 00:04:07.868 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.868 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2544072 00:04:07.868 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2544072 00:04:07.868 15:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2544072 ']' 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.868 15:57:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.868 [2024-11-20 15:57:08.622080] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:07.868 [2024-11-20 15:57:08.622133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544072 ] 00:04:07.868 [2024-11-20 15:57:08.696249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.126 [2024-11-20 15:57:08.740633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.692 15:57:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.692 15:57:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:08.692 15:57:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:08.949 15:57:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2544072 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2544072 ']' 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2544072 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544072 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544072' 00:04:08.949 killing process with pid 2544072 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@973 -- # kill 2544072 00:04:08.949 15:57:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 2544072 00:04:09.206 00:04:09.206 real 0m1.647s 00:04:09.206 user 0m1.819s 00:04:09.206 sys 0m0.448s 00:04:09.206 15:57:10 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.206 15:57:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.206 ************************************ 00:04:09.206 END TEST alias_rpc 00:04:09.206 ************************************ 00:04:09.485 15:57:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:09.485 15:57:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.485 15:57:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.485 15:57:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.485 15:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:09.485 ************************************ 00:04:09.485 START TEST spdkcli_tcp 00:04:09.485 ************************************ 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.485 * Looking for test storage... 
00:04:09.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.485 15:57:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.485 --rc genhtml_branch_coverage=1 00:04:09.485 --rc genhtml_function_coverage=1 00:04:09.485 --rc genhtml_legend=1 00:04:09.485 --rc geninfo_all_blocks=1 00:04:09.485 --rc geninfo_unexecuted_blocks=1 00:04:09.485 00:04:09.485 ' 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.485 --rc genhtml_branch_coverage=1 00:04:09.485 --rc genhtml_function_coverage=1 00:04:09.485 --rc genhtml_legend=1 00:04:09.485 --rc geninfo_all_blocks=1 00:04:09.485 --rc geninfo_unexecuted_blocks=1 00:04:09.485 00:04:09.485 ' 00:04:09.485 15:57:10 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.485 --rc genhtml_branch_coverage=1 00:04:09.485 --rc genhtml_function_coverage=1 00:04:09.485 --rc genhtml_legend=1 00:04:09.485 --rc geninfo_all_blocks=1 00:04:09.485 --rc geninfo_unexecuted_blocks=1 00:04:09.485 00:04:09.485 ' 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.485 --rc genhtml_branch_coverage=1 00:04:09.485 --rc genhtml_function_coverage=1 00:04:09.485 --rc genhtml_legend=1 00:04:09.485 --rc geninfo_all_blocks=1 00:04:09.485 --rc geninfo_unexecuted_blocks=1 00:04:09.485 00:04:09.485 ' 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2544644 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2544644 00:04:09.485 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2544644 ']' 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.485 15:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.744 [2024-11-20 15:57:10.343191] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:09.744 [2024-11-20 15:57:10.343243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544644 ] 00:04:09.744 [2024-11-20 15:57:10.416739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.744 [2024-11-20 15:57:10.459650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.744 [2024-11-20 15:57:10.459651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.001 15:57:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.001 15:57:10 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:10.001 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2544848 00:04:10.001 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:10.001 15:57:10 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:10.258 [ 00:04:10.258 "bdev_malloc_delete", 00:04:10.258 "bdev_malloc_create", 00:04:10.258 "bdev_null_resize", 00:04:10.258 "bdev_null_delete", 00:04:10.258 "bdev_null_create", 00:04:10.258 "bdev_nvme_cuse_unregister", 00:04:10.258 "bdev_nvme_cuse_register", 00:04:10.258 "bdev_opal_new_user", 00:04:10.258 "bdev_opal_set_lock_state", 00:04:10.258 "bdev_opal_delete", 00:04:10.258 "bdev_opal_get_info", 00:04:10.258 "bdev_opal_create", 00:04:10.258 "bdev_nvme_opal_revert", 00:04:10.258 "bdev_nvme_opal_init", 00:04:10.258 "bdev_nvme_send_cmd", 00:04:10.258 "bdev_nvme_set_keys", 00:04:10.258 "bdev_nvme_get_path_iostat", 00:04:10.258 "bdev_nvme_get_mdns_discovery_info", 00:04:10.258 "bdev_nvme_stop_mdns_discovery", 00:04:10.258 "bdev_nvme_start_mdns_discovery", 00:04:10.258 "bdev_nvme_set_multipath_policy", 00:04:10.258 "bdev_nvme_set_preferred_path", 00:04:10.258 "bdev_nvme_get_io_paths", 00:04:10.258 "bdev_nvme_remove_error_injection", 00:04:10.258 "bdev_nvme_add_error_injection", 00:04:10.258 "bdev_nvme_get_discovery_info", 00:04:10.258 "bdev_nvme_stop_discovery", 00:04:10.258 "bdev_nvme_start_discovery", 00:04:10.258 "bdev_nvme_get_controller_health_info", 00:04:10.258 "bdev_nvme_disable_controller", 00:04:10.258 "bdev_nvme_enable_controller", 00:04:10.258 "bdev_nvme_reset_controller", 00:04:10.258 "bdev_nvme_get_transport_statistics", 00:04:10.259 "bdev_nvme_apply_firmware", 00:04:10.259 "bdev_nvme_detach_controller", 00:04:10.259 "bdev_nvme_get_controllers", 00:04:10.259 "bdev_nvme_attach_controller", 00:04:10.259 "bdev_nvme_set_hotplug", 00:04:10.259 "bdev_nvme_set_options", 00:04:10.259 "bdev_passthru_delete", 00:04:10.259 "bdev_passthru_create", 00:04:10.259 "bdev_lvol_set_parent_bdev", 00:04:10.259 "bdev_lvol_set_parent", 00:04:10.259 "bdev_lvol_check_shallow_copy", 00:04:10.259 "bdev_lvol_start_shallow_copy", 00:04:10.259 "bdev_lvol_grow_lvstore", 00:04:10.259 
"bdev_lvol_get_lvols", 00:04:10.259 "bdev_lvol_get_lvstores", 00:04:10.259 "bdev_lvol_delete", 00:04:10.259 "bdev_lvol_set_read_only", 00:04:10.259 "bdev_lvol_resize", 00:04:10.259 "bdev_lvol_decouple_parent", 00:04:10.259 "bdev_lvol_inflate", 00:04:10.259 "bdev_lvol_rename", 00:04:10.259 "bdev_lvol_clone_bdev", 00:04:10.259 "bdev_lvol_clone", 00:04:10.259 "bdev_lvol_snapshot", 00:04:10.259 "bdev_lvol_create", 00:04:10.259 "bdev_lvol_delete_lvstore", 00:04:10.259 "bdev_lvol_rename_lvstore", 00:04:10.259 "bdev_lvol_create_lvstore", 00:04:10.259 "bdev_raid_set_options", 00:04:10.259 "bdev_raid_remove_base_bdev", 00:04:10.259 "bdev_raid_add_base_bdev", 00:04:10.259 "bdev_raid_delete", 00:04:10.259 "bdev_raid_create", 00:04:10.259 "bdev_raid_get_bdevs", 00:04:10.259 "bdev_error_inject_error", 00:04:10.259 "bdev_error_delete", 00:04:10.259 "bdev_error_create", 00:04:10.259 "bdev_split_delete", 00:04:10.259 "bdev_split_create", 00:04:10.259 "bdev_delay_delete", 00:04:10.259 "bdev_delay_create", 00:04:10.259 "bdev_delay_update_latency", 00:04:10.259 "bdev_zone_block_delete", 00:04:10.259 "bdev_zone_block_create", 00:04:10.259 "blobfs_create", 00:04:10.259 "blobfs_detect", 00:04:10.259 "blobfs_set_cache_size", 00:04:10.259 "bdev_aio_delete", 00:04:10.259 "bdev_aio_rescan", 00:04:10.259 "bdev_aio_create", 00:04:10.259 "bdev_ftl_set_property", 00:04:10.259 "bdev_ftl_get_properties", 00:04:10.259 "bdev_ftl_get_stats", 00:04:10.259 "bdev_ftl_unmap", 00:04:10.259 "bdev_ftl_unload", 00:04:10.259 "bdev_ftl_delete", 00:04:10.259 "bdev_ftl_load", 00:04:10.259 "bdev_ftl_create", 00:04:10.259 "bdev_virtio_attach_controller", 00:04:10.259 "bdev_virtio_scsi_get_devices", 00:04:10.259 "bdev_virtio_detach_controller", 00:04:10.259 "bdev_virtio_blk_set_hotplug", 00:04:10.259 "bdev_iscsi_delete", 00:04:10.259 "bdev_iscsi_create", 00:04:10.259 "bdev_iscsi_set_options", 00:04:10.259 "accel_error_inject_error", 00:04:10.259 "ioat_scan_accel_module", 00:04:10.259 "dsa_scan_accel_module", 
00:04:10.259 "iaa_scan_accel_module", 00:04:10.259 "vfu_virtio_create_fs_endpoint", 00:04:10.259 "vfu_virtio_create_scsi_endpoint", 00:04:10.259 "vfu_virtio_scsi_remove_target", 00:04:10.259 "vfu_virtio_scsi_add_target", 00:04:10.259 "vfu_virtio_create_blk_endpoint", 00:04:10.259 "vfu_virtio_delete_endpoint", 00:04:10.259 "keyring_file_remove_key", 00:04:10.259 "keyring_file_add_key", 00:04:10.259 "keyring_linux_set_options", 00:04:10.259 "fsdev_aio_delete", 00:04:10.259 "fsdev_aio_create", 00:04:10.259 "iscsi_get_histogram", 00:04:10.259 "iscsi_enable_histogram", 00:04:10.259 "iscsi_set_options", 00:04:10.259 "iscsi_get_auth_groups", 00:04:10.259 "iscsi_auth_group_remove_secret", 00:04:10.259 "iscsi_auth_group_add_secret", 00:04:10.259 "iscsi_delete_auth_group", 00:04:10.259 "iscsi_create_auth_group", 00:04:10.259 "iscsi_set_discovery_auth", 00:04:10.259 "iscsi_get_options", 00:04:10.259 "iscsi_target_node_request_logout", 00:04:10.259 "iscsi_target_node_set_redirect", 00:04:10.259 "iscsi_target_node_set_auth", 00:04:10.259 "iscsi_target_node_add_lun", 00:04:10.259 "iscsi_get_stats", 00:04:10.259 "iscsi_get_connections", 00:04:10.259 "iscsi_portal_group_set_auth", 00:04:10.259 "iscsi_start_portal_group", 00:04:10.259 "iscsi_delete_portal_group", 00:04:10.259 "iscsi_create_portal_group", 00:04:10.259 "iscsi_get_portal_groups", 00:04:10.259 "iscsi_delete_target_node", 00:04:10.259 "iscsi_target_node_remove_pg_ig_maps", 00:04:10.259 "iscsi_target_node_add_pg_ig_maps", 00:04:10.259 "iscsi_create_target_node", 00:04:10.259 "iscsi_get_target_nodes", 00:04:10.259 "iscsi_delete_initiator_group", 00:04:10.259 "iscsi_initiator_group_remove_initiators", 00:04:10.259 "iscsi_initiator_group_add_initiators", 00:04:10.259 "iscsi_create_initiator_group", 00:04:10.259 "iscsi_get_initiator_groups", 00:04:10.259 "nvmf_set_crdt", 00:04:10.259 "nvmf_set_config", 00:04:10.259 "nvmf_set_max_subsystems", 00:04:10.259 "nvmf_stop_mdns_prr", 00:04:10.259 "nvmf_publish_mdns_prr", 
00:04:10.259 "nvmf_subsystem_get_listeners", 00:04:10.259 "nvmf_subsystem_get_qpairs", 00:04:10.259 "nvmf_subsystem_get_controllers", 00:04:10.259 "nvmf_get_stats", 00:04:10.259 "nvmf_get_transports", 00:04:10.259 "nvmf_create_transport", 00:04:10.259 "nvmf_get_targets", 00:04:10.259 "nvmf_delete_target", 00:04:10.259 "nvmf_create_target", 00:04:10.259 "nvmf_subsystem_allow_any_host", 00:04:10.259 "nvmf_subsystem_set_keys", 00:04:10.259 "nvmf_subsystem_remove_host", 00:04:10.259 "nvmf_subsystem_add_host", 00:04:10.259 "nvmf_ns_remove_host", 00:04:10.259 "nvmf_ns_add_host", 00:04:10.259 "nvmf_subsystem_remove_ns", 00:04:10.259 "nvmf_subsystem_set_ns_ana_group", 00:04:10.259 "nvmf_subsystem_add_ns", 00:04:10.259 "nvmf_subsystem_listener_set_ana_state", 00:04:10.259 "nvmf_discovery_get_referrals", 00:04:10.259 "nvmf_discovery_remove_referral", 00:04:10.259 "nvmf_discovery_add_referral", 00:04:10.259 "nvmf_subsystem_remove_listener", 00:04:10.259 "nvmf_subsystem_add_listener", 00:04:10.259 "nvmf_delete_subsystem", 00:04:10.259 "nvmf_create_subsystem", 00:04:10.259 "nvmf_get_subsystems", 00:04:10.259 "env_dpdk_get_mem_stats", 00:04:10.259 "nbd_get_disks", 00:04:10.259 "nbd_stop_disk", 00:04:10.259 "nbd_start_disk", 00:04:10.259 "ublk_recover_disk", 00:04:10.259 "ublk_get_disks", 00:04:10.259 "ublk_stop_disk", 00:04:10.259 "ublk_start_disk", 00:04:10.259 "ublk_destroy_target", 00:04:10.259 "ublk_create_target", 00:04:10.259 "virtio_blk_create_transport", 00:04:10.259 "virtio_blk_get_transports", 00:04:10.259 "vhost_controller_set_coalescing", 00:04:10.259 "vhost_get_controllers", 00:04:10.259 "vhost_delete_controller", 00:04:10.259 "vhost_create_blk_controller", 00:04:10.259 "vhost_scsi_controller_remove_target", 00:04:10.259 "vhost_scsi_controller_add_target", 00:04:10.259 "vhost_start_scsi_controller", 00:04:10.259 "vhost_create_scsi_controller", 00:04:10.259 "thread_set_cpumask", 00:04:10.259 "scheduler_set_options", 00:04:10.259 "framework_get_governor", 00:04:10.259 
"framework_get_scheduler", 00:04:10.259 "framework_set_scheduler", 00:04:10.259 "framework_get_reactors", 00:04:10.259 "thread_get_io_channels", 00:04:10.259 "thread_get_pollers", 00:04:10.259 "thread_get_stats", 00:04:10.259 "framework_monitor_context_switch", 00:04:10.259 "spdk_kill_instance", 00:04:10.259 "log_enable_timestamps", 00:04:10.259 "log_get_flags", 00:04:10.259 "log_clear_flag", 00:04:10.259 "log_set_flag", 00:04:10.259 "log_get_level", 00:04:10.259 "log_set_level", 00:04:10.259 "log_get_print_level", 00:04:10.259 "log_set_print_level", 00:04:10.259 "framework_enable_cpumask_locks", 00:04:10.259 "framework_disable_cpumask_locks", 00:04:10.259 "framework_wait_init", 00:04:10.259 "framework_start_init", 00:04:10.259 "scsi_get_devices", 00:04:10.259 "bdev_get_histogram", 00:04:10.259 "bdev_enable_histogram", 00:04:10.259 "bdev_set_qos_limit", 00:04:10.259 "bdev_set_qd_sampling_period", 00:04:10.259 "bdev_get_bdevs", 00:04:10.259 "bdev_reset_iostat", 00:04:10.259 "bdev_get_iostat", 00:04:10.259 "bdev_examine", 00:04:10.259 "bdev_wait_for_examine", 00:04:10.259 "bdev_set_options", 00:04:10.259 "accel_get_stats", 00:04:10.259 "accel_set_options", 00:04:10.259 "accel_set_driver", 00:04:10.259 "accel_crypto_key_destroy", 00:04:10.259 "accel_crypto_keys_get", 00:04:10.259 "accel_crypto_key_create", 00:04:10.259 "accel_assign_opc", 00:04:10.259 "accel_get_module_info", 00:04:10.259 "accel_get_opc_assignments", 00:04:10.259 "vmd_rescan", 00:04:10.259 "vmd_remove_device", 00:04:10.259 "vmd_enable", 00:04:10.259 "sock_get_default_impl", 00:04:10.259 "sock_set_default_impl", 00:04:10.259 "sock_impl_set_options", 00:04:10.259 "sock_impl_get_options", 00:04:10.259 "iobuf_get_stats", 00:04:10.259 "iobuf_set_options", 00:04:10.259 "keyring_get_keys", 00:04:10.259 "vfu_tgt_set_base_path", 00:04:10.259 "framework_get_pci_devices", 00:04:10.259 "framework_get_config", 00:04:10.259 "framework_get_subsystems", 00:04:10.259 "fsdev_set_opts", 00:04:10.259 "fsdev_get_opts", 
00:04:10.259 "trace_get_info", 00:04:10.259 "trace_get_tpoint_group_mask", 00:04:10.259 "trace_disable_tpoint_group", 00:04:10.259 "trace_enable_tpoint_group", 00:04:10.259 "trace_clear_tpoint_mask", 00:04:10.259 "trace_set_tpoint_mask", 00:04:10.259 "notify_get_notifications", 00:04:10.259 "notify_get_types", 00:04:10.259 "spdk_get_version", 00:04:10.259 "rpc_get_methods" 00:04:10.259 ] 00:04:10.259 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.259 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:10.259 15:57:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2544644 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2544644 ']' 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2544644 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544644 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544644' 00:04:10.259 killing process with pid 2544644 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2544644 00:04:10.259 15:57:10 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2544644 00:04:10.516 00:04:10.516 real 0m1.159s 00:04:10.516 user 0m1.964s 00:04:10.516 sys 0m0.439s 00:04:10.516 15:57:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.516 15:57:11 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.516 ************************************ 00:04:10.516 END TEST spdkcli_tcp 00:04:10.516 ************************************ 00:04:10.517 15:57:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.517 15:57:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.517 15:57:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.517 15:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.517 ************************************ 00:04:10.517 START TEST dpdk_mem_utility 00:04:10.517 ************************************ 00:04:10.517 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.774 * Looking for test storage... 00:04:10.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.774 15:57:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:10.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.774 --rc genhtml_branch_coverage=1 00:04:10.774 --rc genhtml_function_coverage=1 00:04:10.774 --rc genhtml_legend=1 00:04:10.774 --rc geninfo_all_blocks=1 00:04:10.774 --rc geninfo_unexecuted_blocks=1 00:04:10.774 00:04:10.774 ' 00:04:10.774 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.774 --rc genhtml_branch_coverage=1 00:04:10.774 --rc genhtml_function_coverage=1 00:04:10.774 --rc genhtml_legend=1 00:04:10.774 --rc geninfo_all_blocks=1 00:04:10.775 --rc geninfo_unexecuted_blocks=1 00:04:10.775 00:04:10.775 ' 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.775 --rc genhtml_branch_coverage=1 00:04:10.775 --rc genhtml_function_coverage=1 00:04:10.775 --rc genhtml_legend=1 00:04:10.775 --rc geninfo_all_blocks=1 00:04:10.775 --rc geninfo_unexecuted_blocks=1 00:04:10.775 00:04:10.775 ' 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.775 --rc genhtml_branch_coverage=1 00:04:10.775 --rc genhtml_function_coverage=1 00:04:10.775 --rc genhtml_legend=1 00:04:10.775 --rc geninfo_all_blocks=1 00:04:10.775 --rc geninfo_unexecuted_blocks=1 00:04:10.775 00:04:10.775 ' 00:04:10.775 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.775 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2544934 00:04:10.775 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2544934 00:04:10.775 15:57:11 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2544934 ']' 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.775 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.775 [2024-11-20 15:57:11.567517] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:10.775 [2024-11-20 15:57:11.567567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544934 ] 00:04:11.033 [2024-11-20 15:57:11.640855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.033 [2024-11-20 15:57:11.681599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.293 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.293 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:11.293 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:11.293 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:11.293 15:57:11 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.293 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.293 { 00:04:11.293 "filename": "/tmp/spdk_mem_dump.txt" 00:04:11.293 } 00:04:11.293 15:57:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.293 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:11.293 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:11.293 1 heaps totaling size 818.000000 MiB 00:04:11.293 size: 818.000000 MiB heap id: 0 00:04:11.293 end heaps---------- 00:04:11.293 9 mempools totaling size 603.782043 MiB 00:04:11.293 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:11.293 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:11.293 size: 100.555481 MiB name: bdev_io_2544934 00:04:11.293 size: 50.003479 MiB name: msgpool_2544934 00:04:11.293 size: 36.509338 MiB name: fsdev_io_2544934 00:04:11.293 size: 21.763794 MiB name: PDU_Pool 00:04:11.293 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:11.293 size: 4.133484 MiB name: evtpool_2544934 00:04:11.293 size: 0.026123 MiB name: Session_Pool 00:04:11.293 end mempools------- 00:04:11.293 6 memzones totaling size 4.142822 MiB 00:04:11.293 size: 1.000366 MiB name: RG_ring_0_2544934 00:04:11.293 size: 1.000366 MiB name: RG_ring_1_2544934 00:04:11.293 size: 1.000366 MiB name: RG_ring_4_2544934 00:04:11.293 size: 1.000366 MiB name: RG_ring_5_2544934 00:04:11.293 size: 0.125366 MiB name: RG_ring_2_2544934 00:04:11.293 size: 0.015991 MiB name: RG_ring_3_2544934 00:04:11.293 end memzones------- 00:04:11.293 15:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:11.293 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:11.293 list of free elements. 
size: 10.852478 MiB 00:04:11.293 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:11.293 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:11.293 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:11.293 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:11.293 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:11.293 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:11.293 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:11.293 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:11.293 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:11.293 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:11.293 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:11.293 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:11.293 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:11.293 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:11.293 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:11.293 list of standard malloc elements. 
size: 199.218628 MiB 00:04:11.293 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:11.293 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:11.293 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:11.293 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:11.293 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:11.293 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:11.293 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:11.293 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:11.293 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:11.293 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:11.293 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:11.293 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:11.293 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:11.293 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:11.293 list of memzone associated elements. 
size: 607.928894 MiB 00:04:11.293 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:11.293 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:11.293 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:11.293 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:11.293 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:11.293 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2544934_0 00:04:11.293 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:11.293 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2544934_0 00:04:11.293 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:11.293 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2544934_0 00:04:11.293 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:11.293 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:11.293 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:11.294 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:11.294 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:11.294 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2544934_0 00:04:11.294 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:11.294 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2544934 00:04:11.294 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:11.294 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2544934 00:04:11.294 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:11.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:11.294 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:11.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:11.294 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:11.294 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:11.294 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:11.294 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:11.294 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:11.294 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2544934 00:04:11.294 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:11.294 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2544934 00:04:11.294 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:11.294 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2544934 00:04:11.294 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:11.294 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2544934 00:04:11.294 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:11.294 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2544934 00:04:11.294 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:11.294 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2544934 00:04:11.294 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:11.294 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:11.294 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:11.294 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:11.294 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:11.294 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:11.294 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:11.294 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2544934 00:04:11.294 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:11.294 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2544934 00:04:11.294 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:11.294 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:11.294 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:11.294 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:11.294 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:11.294 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2544934 00:04:11.294 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:11.294 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:11.294 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:11.294 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2544934 00:04:11.294 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:11.294 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2544934 00:04:11.294 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:11.294 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2544934 00:04:11.294 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:11.294 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:11.294 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:11.294 15:57:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2544934 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2544934 ']' 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2544934 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544934 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.294 15:57:12 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544934' 00:04:11.294 killing process with pid 2544934 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2544934 00:04:11.294 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2544934 00:04:11.552 00:04:11.552 real 0m1.035s 00:04:11.552 user 0m0.994s 00:04:11.552 sys 0m0.387s 00:04:11.552 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.552 15:57:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.552 ************************************ 00:04:11.552 END TEST dpdk_mem_utility 00:04:11.552 ************************************ 00:04:11.811 15:57:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.811 15:57:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.811 15:57:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.811 15:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.811 ************************************ 00:04:11.811 START TEST event 00:04:11.811 ************************************ 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.811 * Looking for test storage... 
00:04:11.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:11.811 15:57:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.811 15:57:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.811 15:57:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.811 15:57:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.811 15:57:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.811 15:57:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.811 15:57:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.811 15:57:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.811 15:57:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.811 15:57:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.811 15:57:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.811 15:57:12 event -- scripts/common.sh@344 -- # case "$op" in 00:04:11.811 15:57:12 event -- scripts/common.sh@345 -- # : 1 00:04:11.811 15:57:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.811 15:57:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.811 15:57:12 event -- scripts/common.sh@365 -- # decimal 1 00:04:11.811 15:57:12 event -- scripts/common.sh@353 -- # local d=1 00:04:11.811 15:57:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.811 15:57:12 event -- scripts/common.sh@355 -- # echo 1 00:04:11.811 15:57:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.811 15:57:12 event -- scripts/common.sh@366 -- # decimal 2 00:04:11.811 15:57:12 event -- scripts/common.sh@353 -- # local d=2 00:04:11.811 15:57:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.811 15:57:12 event -- scripts/common.sh@355 -- # echo 2 00:04:11.811 15:57:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.811 15:57:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.811 15:57:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.811 15:57:12 event -- scripts/common.sh@368 -- # return 0 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.811 --rc genhtml_branch_coverage=1 00:04:11.811 --rc genhtml_function_coverage=1 00:04:11.811 --rc genhtml_legend=1 00:04:11.811 --rc geninfo_all_blocks=1 00:04:11.811 --rc geninfo_unexecuted_blocks=1 00:04:11.811 00:04:11.811 ' 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.811 --rc genhtml_branch_coverage=1 00:04:11.811 --rc genhtml_function_coverage=1 00:04:11.811 --rc genhtml_legend=1 00:04:11.811 --rc geninfo_all_blocks=1 00:04:11.811 --rc geninfo_unexecuted_blocks=1 00:04:11.811 00:04:11.811 ' 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:11.811 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:11.811 --rc genhtml_branch_coverage=1 00:04:11.811 --rc genhtml_function_coverage=1 00:04:11.811 --rc genhtml_legend=1 00:04:11.811 --rc geninfo_all_blocks=1 00:04:11.811 --rc geninfo_unexecuted_blocks=1 00:04:11.811 00:04:11.811 ' 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.811 --rc genhtml_branch_coverage=1 00:04:11.811 --rc genhtml_function_coverage=1 00:04:11.811 --rc genhtml_legend=1 00:04:11.811 --rc geninfo_all_blocks=1 00:04:11.811 --rc geninfo_unexecuted_blocks=1 00:04:11.811 00:04:11.811 ' 00:04:11.811 15:57:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:11.811 15:57:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:11.811 15:57:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:11.811 15:57:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.811 15:57:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.069 ************************************ 00:04:12.069 START TEST event_perf 00:04:12.069 ************************************ 00:04:12.069 15:57:12 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:12.069 Running I/O for 1 seconds...[2024-11-20 15:57:12.672560] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:12.069 [2024-11-20 15:57:12.672629] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545225 ] 00:04:12.069 [2024-11-20 15:57:12.751909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:12.069 [2024-11-20 15:57:12.796199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.069 [2024-11-20 15:57:12.796308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.069 [2024-11-20 15:57:12.796391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.069 [2024-11-20 15:57:12.796392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:13.003 Running I/O for 1 seconds... 00:04:13.003 lcore 0: 205634 00:04:13.003 lcore 1: 205633 00:04:13.003 lcore 2: 205634 00:04:13.003 lcore 3: 205635 00:04:13.003 done. 
00:04:13.003 00:04:13.003 real 0m1.184s 00:04:13.003 user 0m4.099s 00:04:13.003 sys 0m0.082s 00:04:13.003 15:57:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.003 15:57:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:13.003 ************************************ 00:04:13.003 END TEST event_perf 00:04:13.003 ************************************ 00:04:13.263 15:57:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:13.263 15:57:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:13.263 15:57:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.263 15:57:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.263 ************************************ 00:04:13.263 START TEST event_reactor 00:04:13.263 ************************************ 00:04:13.263 15:57:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:13.263 [2024-11-20 15:57:13.930280] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:13.263 [2024-11-20 15:57:13.930341] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545478 ] 00:04:13.263 [2024-11-20 15:57:14.007397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.263 [2024-11-20 15:57:14.048029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.641 test_start 00:04:14.641 oneshot 00:04:14.641 tick 100 00:04:14.641 tick 100 00:04:14.641 tick 250 00:04:14.641 tick 100 00:04:14.641 tick 100 00:04:14.641 tick 100 00:04:14.641 tick 250 00:04:14.641 tick 500 00:04:14.641 tick 100 00:04:14.641 tick 100 00:04:14.641 tick 250 00:04:14.641 tick 100 00:04:14.641 tick 100 00:04:14.641 test_end 00:04:14.641 00:04:14.641 real 0m1.176s 00:04:14.641 user 0m1.098s 00:04:14.641 sys 0m0.073s 00:04:14.641 15:57:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.641 15:57:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:14.641 ************************************ 00:04:14.641 END TEST event_reactor 00:04:14.641 ************************************ 00:04:14.641 15:57:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:14.641 15:57:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:14.641 15:57:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.641 15:57:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.641 ************************************ 00:04:14.641 START TEST event_reactor_perf 00:04:14.641 ************************************ 00:04:14.641 15:57:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:14.641 [2024-11-20 15:57:15.179360] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:14.641 [2024-11-20 15:57:15.179431] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545724 ] 00:04:14.641 [2024-11-20 15:57:15.261794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.641 [2024-11-20 15:57:15.303092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.579 test_start 00:04:15.579 test_end 00:04:15.579 Performance: 509519 events per second 00:04:15.579 00:04:15.579 real 0m1.183s 00:04:15.579 user 0m1.101s 00:04:15.579 sys 0m0.077s 00:04:15.579 15:57:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.579 15:57:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:15.579 ************************************ 00:04:15.579 END TEST event_reactor_perf 00:04:15.579 ************************************ 00:04:15.579 15:57:16 event -- event/event.sh@49 -- # uname -s 00:04:15.579 15:57:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:15.579 15:57:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.579 15:57:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.579 15:57:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.579 15:57:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.838 ************************************ 00:04:15.838 START TEST event_scheduler 00:04:15.838 ************************************ 00:04:15.838 15:57:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.838 * Looking for test storage... 00:04:15.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:15.838 15:57:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.838 15:57:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.838 15:57:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.838 15:57:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.838 15:57:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.839 --rc genhtml_branch_coverage=1 00:04:15.839 --rc genhtml_function_coverage=1 00:04:15.839 --rc genhtml_legend=1 00:04:15.839 --rc geninfo_all_blocks=1 00:04:15.839 --rc geninfo_unexecuted_blocks=1 00:04:15.839 00:04:15.839 ' 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.839 --rc genhtml_branch_coverage=1 00:04:15.839 --rc genhtml_function_coverage=1 00:04:15.839 --rc 
genhtml_legend=1 00:04:15.839 --rc geninfo_all_blocks=1 00:04:15.839 --rc geninfo_unexecuted_blocks=1 00:04:15.839 00:04:15.839 ' 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.839 --rc genhtml_branch_coverage=1 00:04:15.839 --rc genhtml_function_coverage=1 00:04:15.839 --rc genhtml_legend=1 00:04:15.839 --rc geninfo_all_blocks=1 00:04:15.839 --rc geninfo_unexecuted_blocks=1 00:04:15.839 00:04:15.839 ' 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.839 --rc genhtml_branch_coverage=1 00:04:15.839 --rc genhtml_function_coverage=1 00:04:15.839 --rc genhtml_legend=1 00:04:15.839 --rc geninfo_all_blocks=1 00:04:15.839 --rc geninfo_unexecuted_blocks=1 00:04:15.839 00:04:15.839 ' 00:04:15.839 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:15.839 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2546015 00:04:15.839 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:15.839 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.839 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2546015 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2546015 ']' 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.839 15:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.839 [2024-11-20 15:57:16.635300] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:15.839 [2024-11-20 15:57:16.635345] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546015 ] 00:04:16.098 [2024-11-20 15:57:16.709884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:16.098 [2024-11-20 15:57:16.755580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.098 [2024-11-20 15:57:16.755671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.098 [2024-11-20 15:57:16.755759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:16.098 [2024-11-20 15:57:16.755760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:16.098 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.098 [2024-11-20 15:57:16.792361] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:16.098 [2024-11-20 15:57:16.792377] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:16.098 [2024-11-20 15:57:16.792386] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:16.098 [2024-11-20 15:57:16.792392] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:16.098 [2024-11-20 15:57:16.792397] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.098 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.098 [2024-11-20 15:57:16.867187] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.098 15:57:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.098 15:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.098 ************************************ 00:04:16.098 START TEST scheduler_create_thread 00:04:16.098 ************************************ 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.098 2 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.098 3 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.098 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 4 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 5 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 6 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 7 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 8 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.357 15:57:16 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 9 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.357 10 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.357 15:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:16.358 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.358 15:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.358 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.358 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:16.358 15:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:16.358 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.358 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.924 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.924 15:57:17 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:16.924 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.924 15:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.298 15:57:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.298 15:57:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:18.298 15:57:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:18.298 15:57:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.298 15:57:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.231 15:57:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.231 00:04:19.231 real 0m3.099s 00:04:19.231 user 0m0.024s 00:04:19.231 sys 0m0.005s 00:04:19.231 15:57:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.231 15:57:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.231 ************************************ 00:04:19.231 END TEST scheduler_create_thread 00:04:19.231 ************************************ 00:04:19.231 15:57:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:19.231 15:57:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2546015 00:04:19.231 15:57:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2546015 ']' 00:04:19.231 15:57:20 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2546015 00:04:19.231 15:57:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:19.231 15:57:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.231 15:57:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546015 00:04:19.490 15:57:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:19.490 15:57:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:19.490 15:57:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546015' 00:04:19.490 killing process with pid 2546015 00:04:19.490 15:57:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2546015 00:04:19.490 15:57:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2546015 00:04:19.749 [2024-11-20 15:57:20.382426] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
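The scheduler test above pins one active and one idle thread per core using hex CPU masks (`-m 0x1` through `-m 0x8`, with the test app itself launched under `-m 0xF`). A minimal sketch of how such a bitmask decodes to core indices (the helper name is illustrative, not an SPDK API):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Decode a CPU-affinity bitmask into the list of core indices it selects."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# Masks seen in the trace: one pinned thread per core, app mask covers all four.
assert cores_from_mask(0x1) == [0]
assert cores_from_mask(0x8) == [3]
assert cores_from_mask(0xF) == [0, 1, 2, 3]
```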
00:04:19.749 00:04:19.749 real 0m4.153s 00:04:19.749 user 0m6.626s 00:04:19.749 sys 0m0.369s 00:04:19.749 15:57:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.749 15:57:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.749 ************************************ 00:04:19.749 END TEST event_scheduler 00:04:19.749 ************************************ 00:04:20.007 15:57:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:20.007 15:57:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:20.007 15:57:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.007 15:57:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.007 15:57:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.007 ************************************ 00:04:20.007 START TEST app_repeat 00:04:20.007 ************************************ 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2546754 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2546754' 00:04:20.007 Process app_repeat pid: 2546754 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:20.007 spdk_app_start Round 0 00:04:20.007 15:57:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2546754 /var/tmp/spdk-nbd.sock 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2546754 ']' 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:20.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.007 15:57:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.007 [2024-11-20 15:57:20.683521] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:20.007 [2024-11-20 15:57:20.683577] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546754 ] 00:04:20.007 [2024-11-20 15:57:20.741248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.007 [2024-11-20 15:57:20.783968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.007 [2024-11-20 15:57:20.783972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.266 15:57:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.266 15:57:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:20.266 15:57:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.266 Malloc0 00:04:20.266 15:57:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.525 Malloc1 00:04:20.525 15:57:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.525 
15:57:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.525 15:57:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.783 /dev/nbd0 00:04:20.783 15:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.783 15:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:20.783 1+0 records in 00:04:20.783 1+0 records out 00:04:20.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230039 s, 17.8 MB/s 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:20.783 15:57:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:20.783 15:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.783 15:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.783 15:57:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:21.042 /dev/nbd1 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:21.042 15:57:21 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.042 1+0 records in 00:04:21.042 1+0 records out 00:04:21.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211623 s, 19.4 MB/s 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:21.042 15:57:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.042 15:57:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:21.301 { 00:04:21.301 "nbd_device": "/dev/nbd0", 00:04:21.301 "bdev_name": "Malloc0" 00:04:21.301 }, 00:04:21.301 { 00:04:21.301 "nbd_device": "/dev/nbd1", 00:04:21.301 "bdev_name": "Malloc1" 00:04:21.301 } 00:04:21.301 ]' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:21.301 { 00:04:21.301 "nbd_device": "/dev/nbd0", 00:04:21.301 "bdev_name": "Malloc0" 00:04:21.301 
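The single-block `dd` reads above each report a transfer rate computed as bytes copied over elapsed seconds, in decimal megabytes. A quick sanity check of the figures printed in the trace:

```python
def dd_rate_mb_s(nbytes: int, seconds: float) -> float:
    # GNU dd reports decimal megabytes per second (1 MB = 1e6 bytes).
    return nbytes / seconds / 1e6

# Figures from the two 4096-byte reads of /dev/nbd0 and /dev/nbd1 above.
assert round(dd_rate_mb_s(4096, 0.000230039), 1) == 17.8
assert round(dd_rate_mb_s(4096, 0.000211623), 1) == 19.4
```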
}, 00:04:21.301 { 00:04:21.301 "nbd_device": "/dev/nbd1", 00:04:21.301 "bdev_name": "Malloc1" 00:04:21.301 } 00:04:21.301 ]' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:21.301 /dev/nbd1' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:21.301 /dev/nbd1' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:21.301 256+0 records in 00:04:21.301 256+0 records out 00:04:21.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994262 s, 105 MB/s 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
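The verify step here pipes the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and `grep -c /dev/nbd` to confirm that exactly two devices are attached (and, after teardown further down, zero). A self-contained sketch of the same count in Python, where the sample JSON mirrors the trace rather than coming from a live RPC:

```python
import json

def count_nbd_devices(disks_json: str) -> int:
    """Replicate jq '.[] | .nbd_device' | grep -c /dev/nbd on nbd_get_disks output."""
    disks = json.loads(disks_json)
    return sum(1 for d in disks if d["nbd_device"].startswith("/dev/nbd"))

sample = '''[
  {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
  {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}
]'''
assert count_nbd_devices(sample) == 2   # both Malloc bdevs exported
assert count_nbd_devices("[]") == 0     # after nbd_stop_disk teardown
```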
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:21.301 256+0 records in 00:04:21.301 256+0 records out 00:04:21.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148182 s, 70.8 MB/s 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:21.301 256+0 records in 00:04:21.301 256+0 records out 00:04:21.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152795 s, 68.6 MB/s 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:21.301 15:57:22 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.301 15:57:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.560 15:57:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:21.819 15:57:22 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.819 15:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.080 15:57:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.080 15:57:22 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.340 15:57:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:22.599 [2024-11-20 15:57:23.176240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.599 [2024-11-20 15:57:23.214207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.599 [2024-11-20 15:57:23.214209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.599 [2024-11-20 15:57:23.255268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:22.599 [2024-11-20 15:57:23.255314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:25.886 15:57:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.886 15:57:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:25.886 spdk_app_start Round 1 00:04:25.886 15:57:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2546754 /var/tmp/spdk-nbd.sock 00:04:25.886 15:57:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2546754 ']' 00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.887 15:57:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:25.887 15:57:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.887 Malloc0 00:04:25.887 15:57:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.887 Malloc1 00:04:25.887 15:57:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.887 15:57:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.145 /dev/nbd0 00:04:26.145 15:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.145 15:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.145 1+0 records in 00:04:26.145 1+0 records out 00:04:26.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201161 s, 20.4 MB/s 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:26.145 15:57:26 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:26.145 15:57:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:26.145 15:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.145 15:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.145 15:57:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.404 /dev/nbd1 00:04:26.404 15:57:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.404 15:57:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.404 1+0 records in 00:04:26.404 1+0 records out 00:04:26.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225533 s, 18.2 MB/s 00:04:26.404 15:57:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.405 15:57:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:26.405 15:57:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.405 15:57:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:26.405 15:57:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:26.405 15:57:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.405 15:57:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.405 15:57:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.405 15:57:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.405 15:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:26.673 { 00:04:26.673 "nbd_device": "/dev/nbd0", 00:04:26.673 "bdev_name": "Malloc0" 00:04:26.673 }, 00:04:26.673 { 00:04:26.673 "nbd_device": "/dev/nbd1", 00:04:26.673 "bdev_name": "Malloc1" 00:04:26.673 } 00:04:26.673 ]' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.673 { 00:04:26.673 "nbd_device": "/dev/nbd0", 00:04:26.673 "bdev_name": "Malloc0" 00:04:26.673 }, 00:04:26.673 { 00:04:26.673 "nbd_device": "/dev/nbd1", 00:04:26.673 "bdev_name": "Malloc1" 00:04:26.673 } 00:04:26.673 ]' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.673 /dev/nbd1' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.673 /dev/nbd1' 00:04:26.673 
15:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.673 256+0 records in 00:04:26.673 256+0 records out 00:04:26.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609213 s, 172 MB/s 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.673 256+0 records in 00:04:26.673 256+0 records out 00:04:26.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014035 s, 74.7 MB/s 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.673 256+0 records in 00:04:26.673 256+0 records out 00:04:26.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151277 s, 69.3 MB/s 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.673 15:57:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.674 15:57:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:26.674 15:57:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.674 15:57:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.674 15:57:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.674 15:57:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.931 15:57:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.192 15:57:27 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.192 15:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.517 15:57:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.517 15:57:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.805 15:57:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.805 [2024-11-20 15:57:28.509057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.805 [2024-11-20 15:57:28.546479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.805 [2024-11-20 15:57:28.546480] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.805 [2024-11-20 15:57:28.588167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.805 [2024-11-20 15:57:28.588209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.110 15:57:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.110 15:57:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:31.110 spdk_app_start Round 2 00:04:31.111 15:57:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2546754 /var/tmp/spdk-nbd.sock 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2546754 ']' 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.111 15:57:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:31.111 15:57:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.111 Malloc0 00:04:31.111 15:57:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.369 Malloc1 00:04:31.369 15:57:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.370 15:57:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.370 /dev/nbd0 00:04:31.370 15:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.370 15:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.370 1+0 records in 00:04:31.370 1+0 records out 00:04:31.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209462 s, 19.6 MB/s 00:04:31.370 15:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:31.628 15:57:32 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:31.628 15:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.628 15:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.628 15:57:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.628 /dev/nbd1 00:04:31.628 15:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.628 15:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.628 1+0 records in 00:04:31.628 1+0 records out 00:04:31.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237637 s, 17.2 MB/s 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:31.628 15:57:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:31.628 15:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.886 { 00:04:31.886 "nbd_device": "/dev/nbd0", 00:04:31.886 "bdev_name": "Malloc0" 00:04:31.886 }, 00:04:31.886 { 00:04:31.886 "nbd_device": "/dev/nbd1", 00:04:31.886 "bdev_name": "Malloc1" 00:04:31.886 } 00:04:31.886 ]' 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.886 { 00:04:31.886 "nbd_device": "/dev/nbd0", 00:04:31.886 "bdev_name": "Malloc0" 00:04:31.886 }, 00:04:31.886 { 00:04:31.886 "nbd_device": "/dev/nbd1", 00:04:31.886 "bdev_name": "Malloc1" 00:04:31.886 } 00:04:31.886 ]' 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.886 /dev/nbd1' 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.886 /dev/nbd1' 00:04:31.886 
15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.886 15:57:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.145 256+0 records in 00:04:32.145 256+0 records out 00:04:32.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106384 s, 98.6 MB/s 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.145 256+0 records in 00:04:32.145 256+0 records out 00:04:32.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141627 s, 74.0 MB/s 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.145 256+0 records in 00:04:32.145 256+0 records out 00:04:32.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148384 s, 70.7 MB/s 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.145 15:57:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.404 15:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.404 15:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.404 15:57:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.404 15:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.404 15:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.404 15:57:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.404 15:57:33 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.404 15:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.662 15:57:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.662 15:57:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.920 15:57:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.179 [2024-11-20 15:57:33.815925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.179 [2024-11-20 15:57:33.853361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.179 [2024-11-20 15:57:33.853362] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.179 [2024-11-20 15:57:33.894450] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.179 [2024-11-20 15:57:33.894491] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.467 15:57:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2546754 /var/tmp/spdk-nbd.sock 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2546754 ']' 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:36.467 15:57:36 event.app_repeat -- event/event.sh@39 -- # killprocess 2546754 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2546754 ']' 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2546754 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546754 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546754' 00:04:36.467 killing process with pid 2546754 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2546754 00:04:36.467 15:57:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2546754 00:04:36.467 spdk_app_start is called in Round 0. 00:04:36.467 Shutdown signal received, stop current app iteration 00:04:36.467 Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 reinitialization... 00:04:36.467 spdk_app_start is called in Round 1. 00:04:36.467 Shutdown signal received, stop current app iteration 00:04:36.467 Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 reinitialization... 00:04:36.467 spdk_app_start is called in Round 2. 
00:04:36.467 Shutdown signal received, stop current app iteration 00:04:36.467 Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 reinitialization... 00:04:36.467 spdk_app_start is called in Round 3. 00:04:36.467 Shutdown signal received, stop current app iteration 00:04:36.467 15:57:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:36.467 15:57:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:36.467 00:04:36.467 real 0m16.426s 00:04:36.467 user 0m36.183s 00:04:36.467 sys 0m2.518s 00:04:36.467 15:57:37 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.467 15:57:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.467 ************************************ 00:04:36.467 END TEST app_repeat 00:04:36.467 ************************************ 00:04:36.467 15:57:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:36.467 15:57:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.467 15:57:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.467 15:57:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.467 15:57:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.467 ************************************ 00:04:36.467 START TEST cpu_locks 00:04:36.467 ************************************ 00:04:36.467 15:57:37 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.467 * Looking for test storage... 
00:04:36.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:36.467 15:57:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.467 15:57:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.467 15:57:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.467 15:57:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:36.467 15:57:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.727 15:57:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 00:04:36.727 ' 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 
00:04:36.727 00:04:36.727 ' 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 00:04:36.727 ' 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 00:04:36.727 ' 00:04:36.727 15:57:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:36.727 15:57:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:36.727 15:57:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:36.727 15:57:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.727 15:57:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.727 ************************************ 00:04:36.727 START TEST default_locks 00:04:36.727 ************************************ 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2549758 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2549758 00:04:36.727 15:57:37 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2549758 ']' 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.727 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.727 [2024-11-20 15:57:37.401654] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:36.727 [2024-11-20 15:57:37.401696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549758 ] 00:04:36.727 [2024-11-20 15:57:37.478479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.727 [2024-11-20 15:57:37.521018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.986 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.986 15:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:36.986 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2549758 00:04:36.986 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2549758 00:04:36.986 15:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.554 lslocks: write error 00:04:37.554 15:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2549758 00:04:37.554 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2549758 ']' 00:04:37.554 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2549758 00:04:37.554 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:37.554 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.555 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549758 00:04:37.555 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.555 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.555 15:57:38 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2549758' 00:04:37.555 killing process with pid 2549758 00:04:37.555 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2549758 00:04:37.555 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2549758 00:04:37.814 15:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2549758 00:04:37.814 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2549758 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2549758 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2549758 ']' 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2549758) - No such process 00:04:37.815 ERROR: process (pid: 2549758) is no longer running 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:37.815 00:04:37.815 real 0m1.224s 00:04:37.815 user 0m1.184s 00:04:37.815 sys 0m0.552s 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.815 15:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.815 ************************************ 00:04:37.815 END TEST default_locks 00:04:37.815 ************************************ 00:04:37.815 15:57:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:37.815 15:57:38 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.815 15:57:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.815 15:57:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.815 ************************************ 00:04:37.815 START TEST default_locks_via_rpc 00:04:37.815 ************************************ 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2550014 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2550014 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2550014 ']' 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.815 15:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.075 [2024-11-20 15:57:38.694195] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:38.075 [2024-11-20 15:57:38.694233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550014 ] 00:04:38.075 [2024-11-20 15:57:38.766941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.075 [2024-11-20 15:57:38.808508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 15:57:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.335 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2550014 00:04:38.336 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2550014 00:04:38.336 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.594 15:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2550014 00:04:38.594 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2550014 ']' 00:04:38.594 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2550014 00:04:38.594 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.594 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.594 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550014 00:04:38.853 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.854 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.854 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550014' 00:04:38.854 killing process with pid 2550014 00:04:38.854 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2550014 00:04:38.854 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2550014 00:04:39.113 00:04:39.113 real 0m1.130s 00:04:39.113 user 0m1.087s 00:04:39.113 sys 0m0.497s 00:04:39.113 15:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.113 15:57:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.113 ************************************ 00:04:39.113 END TEST default_locks_via_rpc 00:04:39.113 ************************************ 00:04:39.113 15:57:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:39.113 15:57:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.113 15:57:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.113 15:57:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.113 ************************************ 00:04:39.113 START TEST non_locking_app_on_locked_coremask 00:04:39.113 ************************************ 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2550268 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2550268 /var/tmp/spdk.sock 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550268 ']' 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:39.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.113 15:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.113 [2024-11-20 15:57:39.895707] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:39.113 [2024-11-20 15:57:39.895752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550268 ] 00:04:39.372 [2024-11-20 15:57:39.970992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.372 [2024-11-20 15:57:40.014531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2550285 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2550285 /var/tmp/spdk2.sock 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550285 ']' 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.631 15:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.631 [2024-11-20 15:57:40.295693] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:39.631 [2024-11-20 15:57:40.295746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550285 ] 00:04:39.631 [2024-11-20 15:57:40.388504] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:39.631 [2024-11-20 15:57:40.388527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.890 [2024-11-20 15:57:40.470540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.458 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.458 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.458 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2550268 00:04:40.458 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2550268 00:04:40.458 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.025 lslocks: write error 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2550268 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2550268 ']' 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2550268 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550268 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2550268' 00:04:41.025 killing process with pid 2550268 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2550268 00:04:41.025 15:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2550268 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2550285 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2550285 ']' 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2550285 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550285 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550285' 00:04:41.593 killing process with pid 2550285 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2550285 00:04:41.593 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2550285 00:04:41.851 00:04:41.851 real 0m2.736s 00:04:41.851 user 0m2.864s 00:04:41.851 sys 0m0.925s 00:04:41.851 15:57:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.851 15:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.851 ************************************ 00:04:41.851 END TEST non_locking_app_on_locked_coremask 00:04:41.851 ************************************ 00:04:41.851 15:57:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:41.851 15:57:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.851 15:57:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.851 15:57:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.851 ************************************ 00:04:41.851 START TEST locking_app_on_unlocked_coremask 00:04:41.851 ************************************ 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2550772 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2550772 /var/tmp/spdk.sock 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550772 ']' 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.851 15:57:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.851 15:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.110 [2024-11-20 15:57:42.694692] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:42.110 [2024-11-20 15:57:42.694733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550772 ] 00:04:42.110 [2024-11-20 15:57:42.771345] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:42.110 [2024-11-20 15:57:42.771371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.110 [2024-11-20 15:57:42.814448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2550782 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2550782 /var/tmp/spdk2.sock 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550782 ']' 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.369 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.369 [2024-11-20 15:57:43.082673] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:42.369 [2024-11-20 15:57:43.082720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550782 ] 00:04:42.369 [2024-11-20 15:57:43.173057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.628 [2024-11-20 15:57:43.262296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.195 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.195 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.195 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2550782 00:04:43.195 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2550782 00:04:43.195 15:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.762 lslocks: write error 00:04:43.762 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2550772 00:04:43.762 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2550772 ']' 00:04:43.762 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2550772 00:04:43.762 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:43.762 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.762 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550772 00:04:44.021 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.021 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.021 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550772' 00:04:44.021 killing process with pid 2550772 00:04:44.021 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2550772 00:04:44.021 15:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2550772 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2550782 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2550782 ']' 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2550782 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550782 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550782' 00:04:44.589 killing process with pid 2550782 00:04:44.589 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2550782 00:04:44.589 15:57:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2550782 00:04:44.849 00:04:44.849 real 0m2.957s 00:04:44.849 user 0m3.132s 00:04:44.849 sys 0m0.964s 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.849 ************************************ 00:04:44.849 END TEST locking_app_on_unlocked_coremask 00:04:44.849 ************************************ 00:04:44.849 15:57:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:44.849 15:57:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.849 15:57:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.849 15:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.849 ************************************ 00:04:44.849 START TEST locking_app_on_locked_coremask 00:04:44.849 ************************************ 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2551272 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2551272 /var/tmp/spdk.sock 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2551272 ']' 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.849 15:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.109 [2024-11-20 15:57:45.724764] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:45.109 [2024-11-20 15:57:45.724808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551272 ] 00:04:45.109 [2024-11-20 15:57:45.784609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.109 [2024-11-20 15:57:45.828151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2551283 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2551283 /var/tmp/spdk2.sock 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2551283 /var/tmp/spdk2.sock 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2551283 /var/tmp/spdk2.sock 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2551283 ']' 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.368 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.368 [2024-11-20 15:57:46.101233] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:45.368 [2024-11-20 15:57:46.101280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551283 ] 00:04:45.368 [2024-11-20 15:57:46.193619] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2551272 has claimed it. 00:04:45.368 [2024-11-20 15:57:46.193661] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2551283) - No such process 00:04:45.936 ERROR: process (pid: 2551283) is no longer running 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2551272 00:04:45.936 15:57:46 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2551272 00:04:45.936 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.195 lslocks: write error 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2551272 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2551272 ']' 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2551272 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551272 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551272' 00:04:46.195 killing process with pid 2551272 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2551272 00:04:46.195 15:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2551272 00:04:46.454 00:04:46.455 real 0m1.591s 00:04:46.455 user 0m1.746s 00:04:46.455 sys 0m0.501s 00:04:46.455 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.455 15:57:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.455 ************************************ 00:04:46.455 END TEST locking_app_on_locked_coremask 00:04:46.455 ************************************ 00:04:46.713 15:57:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:46.713 15:57:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.713 15:57:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.713 15:57:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.713 ************************************ 00:04:46.713 START TEST locking_overlapped_coremask 00:04:46.713 ************************************ 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2551544 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2551544 /var/tmp/spdk.sock 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2551544 ']' 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.713 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.713 [2024-11-20 15:57:47.372572] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:46.713 [2024-11-20 15:57:47.372609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551544 ] 00:04:46.713 [2024-11-20 15:57:47.441301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.713 [2024-11-20 15:57:47.500326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.713 [2024-11-20 15:57:47.500440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.713 [2024-11-20 15:57:47.500439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2551707 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2551707 /var/tmp/spdk2.sock 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2551707 /var/tmp/spdk2.sock 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2551707 /var/tmp/spdk2.sock 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2551707 ']' 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.971 15:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.971 [2024-11-20 15:57:47.783915] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:46.971 [2024-11-20 15:57:47.783973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551707 ] 00:04:47.230 [2024-11-20 15:57:47.877519] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2551544 has claimed it. 00:04:47.230 [2024-11-20 15:57:47.877556] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2551707) - No such process 00:04:47.797 ERROR: process (pid: 2551707) is no longer running 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2551544 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2551544 ']' 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2551544 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551544 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551544' 00:04:47.797 killing process with pid 2551544 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2551544 00:04:47.797 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2551544 00:04:48.056 00:04:48.056 real 0m1.460s 00:04:48.056 user 0m4.093s 00:04:48.056 sys 0m0.413s 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 
************************************ 00:04:48.056 END TEST locking_overlapped_coremask 00:04:48.056 ************************************ 00:04:48.056 15:57:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:48.056 15:57:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.056 15:57:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.056 15:57:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 ************************************ 00:04:48.056 START TEST locking_overlapped_coremask_via_rpc 00:04:48.056 ************************************ 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2551816 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2551816 /var/tmp/spdk.sock 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2551816 ']' 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:48.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.056 15:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.314 [2024-11-20 15:57:48.906716] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:48.314 [2024-11-20 15:57:48.906756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551816 ] 00:04:48.314 [2024-11-20 15:57:48.983169] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:48.314 [2024-11-20 15:57:48.983195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.314 [2024-11-20 15:57:49.029415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.314 [2024-11-20 15:57:49.029522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.314 [2024-11-20 15:57:49.029523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.572 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.572 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.572 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2552034 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2552034 /var/tmp/spdk2.sock 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2552034 ']' 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.573 15:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.573 [2024-11-20 15:57:49.298083] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:48.573 [2024-11-20 15:57:49.298135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552034 ] 00:04:48.573 [2024-11-20 15:57:49.389812] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.573 [2024-11-20 15:57:49.389838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.832 [2024-11-20 15:57:49.478695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.832 [2024-11-20 15:57:49.481998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.832 [2024-11-20 15:57:49.481999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.399 15:57:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.399 [2024-11-20 15:57:50.155023] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2551816 has claimed it. 00:04:49.399 request: 00:04:49.399 { 00:04:49.399 "method": "framework_enable_cpumask_locks", 00:04:49.399 "req_id": 1 00:04:49.399 } 00:04:49.399 Got JSON-RPC error response 00:04:49.399 response: 00:04:49.399 { 00:04:49.399 "code": -32603, 00:04:49.399 "message": "Failed to claim CPU core: 2" 00:04:49.399 } 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2551816 /var/tmp/spdk.sock 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2551816 ']' 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.399 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2552034 /var/tmp/spdk2.sock 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2552034 ']' 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.657 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.915 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.915 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.915 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:49.915 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:49.916 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:49.916 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:49.916 00:04:49.916 real 0m1.729s 00:04:49.916 user 0m0.843s 00:04:49.916 sys 0m0.128s 00:04:49.916 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.916 15:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.916 ************************************ 00:04:49.916 END TEST locking_overlapped_coremask_via_rpc 00:04:49.916 ************************************ 00:04:49.916 15:57:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:49.916 15:57:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2551816 ]] 00:04:49.916 15:57:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2551816 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2551816 ']' 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2551816 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551816 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551816' 00:04:49.916 killing process with pid 2551816 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2551816 00:04:49.916 15:57:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2551816 00:04:50.174 15:57:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2552034 ]] 00:04:50.174 15:57:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2552034 00:04:50.174 15:57:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2552034 ']' 00:04:50.174 15:57:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2552034 00:04:50.174 15:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:50.174 15:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.174 15:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552034 00:04:50.433 15:57:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:50.433 15:57:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:50.433 15:57:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2552034' 00:04:50.433 killing process with pid 2552034 00:04:50.433 15:57:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2552034 00:04:50.433 15:57:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2552034 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2551816 ]] 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2551816 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2551816 ']' 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2551816 00:04:50.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2551816) - No such process 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2551816 is not found' 00:04:50.693 Process with pid 2551816 is not found 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2552034 ]] 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2552034 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2552034 ']' 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2552034 00:04:50.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2552034) - No such process 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2552034 is not found' 00:04:50.693 Process with pid 2552034 is not found 00:04:50.693 15:57:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:50.693 00:04:50.693 real 0m14.210s 00:04:50.693 user 0m24.764s 00:04:50.693 sys 0m4.941s 00:04:50.693 15:57:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.693 
15:57:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.693 ************************************ 00:04:50.693 END TEST cpu_locks 00:04:50.693 ************************************ 00:04:50.693 00:04:50.693 real 0m38.947s 00:04:50.693 user 1m14.128s 00:04:50.693 sys 0m8.460s 00:04:50.693 15:57:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.693 15:57:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.693 ************************************ 00:04:50.693 END TEST event 00:04:50.693 ************************************ 00:04:50.693 15:57:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:50.693 15:57:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.693 15:57:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.693 15:57:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.693 ************************************ 00:04:50.693 START TEST thread 00:04:50.693 ************************************ 00:04:50.693 15:57:51 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:50.953 * Looking for test storage... 
00:04:50.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.953 15:57:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.953 15:57:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.953 15:57:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.953 15:57:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.953 15:57:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.953 15:57:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.953 15:57:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.953 15:57:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.953 15:57:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.953 15:57:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.953 15:57:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.953 15:57:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:50.953 15:57:51 thread -- scripts/common.sh@345 -- # : 1 00:04:50.953 15:57:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.953 15:57:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.953 15:57:51 thread -- scripts/common.sh@365 -- # decimal 1 00:04:50.953 15:57:51 thread -- scripts/common.sh@353 -- # local d=1 00:04:50.953 15:57:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.953 15:57:51 thread -- scripts/common.sh@355 -- # echo 1 00:04:50.953 15:57:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.953 15:57:51 thread -- scripts/common.sh@366 -- # decimal 2 00:04:50.953 15:57:51 thread -- scripts/common.sh@353 -- # local d=2 00:04:50.953 15:57:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.953 15:57:51 thread -- scripts/common.sh@355 -- # echo 2 00:04:50.953 15:57:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.953 15:57:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.953 15:57:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.953 15:57:51 thread -- scripts/common.sh@368 -- # return 0 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.953 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 15:57:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.953 15:57:51 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.953 ************************************ 00:04:50.953 START TEST thread_poller_perf 00:04:50.953 ************************************ 00:04:50.953 15:57:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:50.953 [2024-11-20 15:57:51.686843] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:04:50.954 [2024-11-20 15:57:51.686915] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552404 ] 00:04:50.954 [2024-11-20 15:57:51.764773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.212 [2024-11-20 15:57:51.807077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.212 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:52.150 [2024-11-20T14:57:52.987Z] ====================================== 00:04:52.150 [2024-11-20T14:57:52.987Z] busy:2309287412 (cyc) 00:04:52.150 [2024-11-20T14:57:52.987Z] total_run_count: 397000 00:04:52.150 [2024-11-20T14:57:52.987Z] tsc_hz: 2300000000 (cyc) 00:04:52.150 [2024-11-20T14:57:52.987Z] ====================================== 00:04:52.150 [2024-11-20T14:57:52.987Z] poller_cost: 5816 (cyc), 2528 (nsec) 00:04:52.150 00:04:52.150 real 0m1.186s 00:04:52.150 user 0m1.105s 00:04:52.150 sys 0m0.077s 00:04:52.150 15:57:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.150 15:57:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.150 ************************************ 00:04:52.150 END TEST thread_poller_perf 00:04:52.150 ************************************ 00:04:52.150 15:57:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:52.150 15:57:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:52.150 15:57:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.150 15:57:52 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.150 ************************************ 00:04:52.150 START TEST thread_poller_perf 00:04:52.150 
************************************ 00:04:52.150 15:57:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:52.150 [2024-11-20 15:57:52.944820] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:52.150 [2024-11-20 15:57:52.944881] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552628 ] 00:04:52.409 [2024-11-20 15:57:53.026935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.409 [2024-11-20 15:57:53.067961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.409 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:53.348 [2024-11-20T14:57:54.185Z] ====================================== 00:04:53.348 [2024-11-20T14:57:54.185Z] busy:2301817740 (cyc) 00:04:53.348 [2024-11-20T14:57:54.185Z] total_run_count: 5354000 00:04:53.348 [2024-11-20T14:57:54.185Z] tsc_hz: 2300000000 (cyc) 00:04:53.348 [2024-11-20T14:57:54.185Z] ====================================== 00:04:53.348 [2024-11-20T14:57:54.185Z] poller_cost: 429 (cyc), 186 (nsec) 00:04:53.348 00:04:53.348 real 0m1.183s 00:04:53.348 user 0m1.104s 00:04:53.348 sys 0m0.075s 00:04:53.348 15:57:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.348 15:57:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.348 ************************************ 00:04:53.348 END TEST thread_poller_perf 00:04:53.349 ************************************ 00:04:53.349 15:57:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:53.349 00:04:53.349 real 0m2.682s 00:04:53.349 user 0m2.368s 00:04:53.349 sys 0m0.330s 00:04:53.349 15:57:54 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.349 15:57:54 thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.349 ************************************ 00:04:53.349 END TEST thread 00:04:53.349 ************************************ 00:04:53.349 15:57:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:53.349 15:57:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:53.349 15:57:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.349 15:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.349 15:57:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.608 ************************************ 00:04:53.608 START TEST app_cmdline 00:04:53.608 ************************************ 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:53.608 * Looking for test storage... 00:04:53.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.608 15:57:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.608 --rc genhtml_branch_coverage=1 
00:04:53.608 --rc genhtml_function_coverage=1 00:04:53.608 --rc genhtml_legend=1 00:04:53.608 --rc geninfo_all_blocks=1 00:04:53.608 --rc geninfo_unexecuted_blocks=1 00:04:53.608 00:04:53.608 ' 00:04:53.608 15:57:54 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.608 --rc genhtml_branch_coverage=1 00:04:53.608 --rc genhtml_function_coverage=1 00:04:53.608 --rc genhtml_legend=1 00:04:53.608 --rc geninfo_all_blocks=1 00:04:53.608 --rc geninfo_unexecuted_blocks=1 00:04:53.608 00:04:53.608 ' 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.609 --rc genhtml_branch_coverage=1 00:04:53.609 --rc genhtml_function_coverage=1 00:04:53.609 --rc genhtml_legend=1 00:04:53.609 --rc geninfo_all_blocks=1 00:04:53.609 --rc geninfo_unexecuted_blocks=1 00:04:53.609 00:04:53.609 ' 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.609 --rc genhtml_branch_coverage=1 00:04:53.609 --rc genhtml_function_coverage=1 00:04:53.609 --rc genhtml_legend=1 00:04:53.609 --rc geninfo_all_blocks=1 00:04:53.609 --rc geninfo_unexecuted_blocks=1 00:04:53.609 00:04:53.609 ' 00:04:53.609 15:57:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:53.609 15:57:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2552940 00:04:53.609 15:57:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2552940 00:04:53.609 15:57:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2552940 ']' 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.609 15:57:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.868 [2024-11-20 15:57:54.444248] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:53.868 [2024-11-20 15:57:54.444300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552940 ] 00:04:53.868 [2024-11-20 15:57:54.520964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.868 [2024-11-20 15:57:54.563573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.127 15:57:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.127 15:57:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:54.127 15:57:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:54.127 { 00:04:54.127 "version": "SPDK v25.01-pre git sha1 c1691a126", 00:04:54.127 "fields": { 00:04:54.127 "major": 25, 00:04:54.127 "minor": 1, 00:04:54.127 "patch": 0, 00:04:54.127 "suffix": "-pre", 00:04:54.127 "commit": "c1691a126" 00:04:54.127 } 00:04:54.127 } 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:54.387 15:57:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:54.387 15:57:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.387 15:57:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:54.387 15:57:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.387 15:57:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:54.387 15:57:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:54.387 15:57:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:54.387 request: 00:04:54.387 { 00:04:54.387 "method": "env_dpdk_get_mem_stats", 00:04:54.387 "req_id": 1 00:04:54.387 } 00:04:54.387 Got JSON-RPC error response 00:04:54.387 response: 00:04:54.387 { 00:04:54.387 "code": -32601, 00:04:54.387 "message": "Method not found" 00:04:54.387 } 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.387 15:57:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2552940 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2552940 ']' 00:04:54.387 15:57:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2552940 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552940 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552940' 00:04:54.647 killing process with pid 2552940 00:04:54.647 
15:57:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 2552940 00:04:54.647 15:57:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 2552940 00:04:54.907 00:04:54.907 real 0m1.358s 00:04:54.907 user 0m1.581s 00:04:54.907 sys 0m0.444s 00:04:54.908 15:57:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.908 15:57:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:54.908 ************************************ 00:04:54.908 END TEST app_cmdline 00:04:54.908 ************************************ 00:04:54.908 15:57:55 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:54.908 15:57:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.908 15:57:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.908 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:54.908 ************************************ 00:04:54.908 START TEST version 00:04:54.908 ************************************ 00:04:54.908 15:57:55 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:54.908 * Looking for test storage... 
00:04:54.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:54.908 15:57:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.908 15:57:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.908 15:57:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.167 15:57:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.167 15:57:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.167 15:57:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.167 15:57:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.167 15:57:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.167 15:57:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.167 15:57:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.167 15:57:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.167 15:57:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.167 15:57:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.167 15:57:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.167 15:57:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.167 15:57:55 version -- scripts/common.sh@344 -- # case "$op" in 00:04:55.167 15:57:55 version -- scripts/common.sh@345 -- # : 1 00:04:55.167 15:57:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.167 15:57:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.167 15:57:55 version -- scripts/common.sh@365 -- # decimal 1 00:04:55.167 15:57:55 version -- scripts/common.sh@353 -- # local d=1 00:04:55.167 15:57:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.167 15:57:55 version -- scripts/common.sh@355 -- # echo 1 00:04:55.167 15:57:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.167 15:57:55 version -- scripts/common.sh@366 -- # decimal 2 00:04:55.167 15:57:55 version -- scripts/common.sh@353 -- # local d=2 00:04:55.167 15:57:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.167 15:57:55 version -- scripts/common.sh@355 -- # echo 2 00:04:55.167 15:57:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.167 15:57:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.167 15:57:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.167 15:57:55 version -- scripts/common.sh@368 -- # return 0 00:04:55.167 15:57:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.167 15:57:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.168 --rc genhtml_branch_coverage=1 00:04:55.168 --rc genhtml_function_coverage=1 00:04:55.168 --rc genhtml_legend=1 00:04:55.168 --rc geninfo_all_blocks=1 00:04:55.168 --rc geninfo_unexecuted_blocks=1 00:04:55.168 00:04:55.168 ' 00:04:55.168 15:57:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.168 --rc genhtml_branch_coverage=1 00:04:55.168 --rc genhtml_function_coverage=1 00:04:55.168 --rc genhtml_legend=1 00:04:55.168 --rc geninfo_all_blocks=1 00:04:55.168 --rc geninfo_unexecuted_blocks=1 00:04:55.168 00:04:55.168 ' 00:04:55.168 15:57:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.168 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.168 --rc genhtml_branch_coverage=1 00:04:55.168 --rc genhtml_function_coverage=1 00:04:55.168 --rc genhtml_legend=1 00:04:55.168 --rc geninfo_all_blocks=1 00:04:55.168 --rc geninfo_unexecuted_blocks=1 00:04:55.168 00:04:55.168 ' 00:04:55.168 15:57:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.168 --rc genhtml_branch_coverage=1 00:04:55.168 --rc genhtml_function_coverage=1 00:04:55.168 --rc genhtml_legend=1 00:04:55.168 --rc geninfo_all_blocks=1 00:04:55.168 --rc geninfo_unexecuted_blocks=1 00:04:55.168 00:04:55.168 ' 00:04:55.168 15:57:55 version -- app/version.sh@17 -- # get_header_version major 00:04:55.168 15:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.168 15:57:55 version -- app/version.sh@17 -- # major=25 00:04:55.168 15:57:55 version -- app/version.sh@18 -- # get_header_version minor 00:04:55.168 15:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.168 15:57:55 version -- app/version.sh@18 -- # minor=1 00:04:55.168 15:57:55 version -- app/version.sh@19 -- # get_header_version patch 00:04:55.168 15:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.168 
15:57:55 version -- app/version.sh@19 -- # patch=0 00:04:55.168 15:57:55 version -- app/version.sh@20 -- # get_header_version suffix 00:04:55.168 15:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:55.168 15:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.168 15:57:55 version -- app/version.sh@20 -- # suffix=-pre 00:04:55.168 15:57:55 version -- app/version.sh@22 -- # version=25.1 00:04:55.168 15:57:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:55.168 15:57:55 version -- app/version.sh@28 -- # version=25.1rc0 00:04:55.168 15:57:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:55.168 15:57:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:55.168 15:57:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:55.168 15:57:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:55.168 00:04:55.168 real 0m0.251s 00:04:55.168 user 0m0.150s 00:04:55.168 sys 0m0.146s 00:04:55.168 15:57:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.168 15:57:55 version -- common/autotest_common.sh@10 -- # set +x 00:04:55.168 ************************************ 00:04:55.168 END TEST version 00:04:55.168 ************************************ 00:04:55.168 15:57:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:55.168 15:57:55 -- spdk/autotest.sh@194 -- # uname -s 00:04:55.168 15:57:55 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:55.168 15:57:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:55.168 15:57:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:55.168 15:57:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:55.168 15:57:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.168 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:55.168 15:57:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:55.168 15:57:55 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:55.168 15:57:55 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:55.168 15:57:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:55.168 15:57:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.168 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:55.428 ************************************ 00:04:55.428 START TEST nvmf_tcp 00:04:55.428 ************************************ 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:55.428 * Looking for test storage... 
00:04:55.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.428 15:57:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.428 15:57:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.428 --rc genhtml_branch_coverage=1 00:04:55.429 --rc genhtml_function_coverage=1 00:04:55.429 --rc genhtml_legend=1 00:04:55.429 --rc geninfo_all_blocks=1 00:04:55.429 --rc geninfo_unexecuted_blocks=1 00:04:55.429 00:04:55.429 ' 00:04:55.429 15:57:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.429 --rc genhtml_branch_coverage=1 00:04:55.429 --rc genhtml_function_coverage=1 00:04:55.429 --rc genhtml_legend=1 00:04:55.429 --rc geninfo_all_blocks=1 00:04:55.429 --rc geninfo_unexecuted_blocks=1 00:04:55.429 00:04:55.429 ' 00:04:55.429 15:57:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:55.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.429 --rc genhtml_branch_coverage=1 00:04:55.429 --rc genhtml_function_coverage=1 00:04:55.429 --rc genhtml_legend=1 00:04:55.429 --rc geninfo_all_blocks=1 00:04:55.429 --rc geninfo_unexecuted_blocks=1 00:04:55.429 00:04:55.429 ' 00:04:55.429 15:57:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.429 --rc genhtml_branch_coverage=1 00:04:55.429 --rc genhtml_function_coverage=1 00:04:55.429 --rc genhtml_legend=1 00:04:55.429 --rc geninfo_all_blocks=1 00:04:55.429 --rc geninfo_unexecuted_blocks=1 00:04:55.429 00:04:55.429 ' 00:04:55.429 15:57:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:55.429 15:57:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:55.429 15:57:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:55.429 15:57:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:55.429 15:57:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.429 15:57:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.429 ************************************ 00:04:55.429 START TEST nvmf_target_core 00:04:55.429 ************************************ 00:04:55.429 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:55.688 * Looking for test storage... 
00:04:55.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.688 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.689 --rc genhtml_branch_coverage=1 00:04:55.689 --rc genhtml_function_coverage=1 00:04:55.689 --rc genhtml_legend=1 00:04:55.689 --rc geninfo_all_blocks=1 00:04:55.689 --rc geninfo_unexecuted_blocks=1 00:04:55.689 00:04:55.689 ' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.689 --rc genhtml_branch_coverage=1 
00:04:55.689 --rc genhtml_function_coverage=1 00:04:55.689 --rc genhtml_legend=1 00:04:55.689 --rc geninfo_all_blocks=1 00:04:55.689 --rc geninfo_unexecuted_blocks=1 00:04:55.689 00:04:55.689 ' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.689 --rc genhtml_branch_coverage=1 00:04:55.689 --rc genhtml_function_coverage=1 00:04:55.689 --rc genhtml_legend=1 00:04:55.689 --rc geninfo_all_blocks=1 00:04:55.689 --rc geninfo_unexecuted_blocks=1 00:04:55.689 00:04:55.689 ' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.689 --rc genhtml_branch_coverage=1 00:04:55.689 --rc genhtml_function_coverage=1 00:04:55.689 --rc genhtml_legend=1 00:04:55.689 --rc geninfo_all_blocks=1 00:04:55.689 --rc geninfo_unexecuted_blocks=1 00:04:55.689 00:04:55.689 ' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:55.689 ************************************ 00:04:55.689 START TEST nvmf_abort 00:04:55.689 ************************************ 00:04:55.689 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:55.949 * Looking for test storage... 
00:04:55.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:55.949 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.950 
15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.950 --rc genhtml_branch_coverage=1 00:04:55.950 --rc genhtml_function_coverage=1 00:04:55.950 --rc genhtml_legend=1 00:04:55.950 --rc geninfo_all_blocks=1 00:04:55.950 --rc 
geninfo_unexecuted_blocks=1 00:04:55.950 00:04:55.950 ' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.950 --rc genhtml_branch_coverage=1 00:04:55.950 --rc genhtml_function_coverage=1 00:04:55.950 --rc genhtml_legend=1 00:04:55.950 --rc geninfo_all_blocks=1 00:04:55.950 --rc geninfo_unexecuted_blocks=1 00:04:55.950 00:04:55.950 ' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.950 --rc genhtml_branch_coverage=1 00:04:55.950 --rc genhtml_function_coverage=1 00:04:55.950 --rc genhtml_legend=1 00:04:55.950 --rc geninfo_all_blocks=1 00:04:55.950 --rc geninfo_unexecuted_blocks=1 00:04:55.950 00:04:55.950 ' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.950 --rc genhtml_branch_coverage=1 00:04:55.950 --rc genhtml_function_coverage=1 00:04:55.950 --rc genhtml_legend=1 00:04:55.950 --rc geninfo_all_blocks=1 00:04:55.950 --rc geninfo_unexecuted_blocks=1 00:04:55.950 00:04:55.950 ' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
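The lcov version gate traced above (scripts/common.sh `lt` / `cmp_versions`) splits each version string on `.`, `-`, or `:` into an array and compares components numerically, left to right. A minimal standalone re-implementation of that logic, assuming bash 4+; the function name `cmp_lt` is mine, not from the scripts:

```shell
# Simplified sketch of scripts/common.sh's version comparison:
# split on '.', '-', ':' and compare component-wise as integers.
cmp_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"          # IFS splits "1.15" into (1 15)
    read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    if (( ${#v2[@]} > n )); then n=${#v2[@]}; fi
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                       # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This mirrors why the trace sets `ver1_l=2`, `ver2_l=1` and loops `v` up to the longer length before deciding `lt 1.15 2` is true.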
00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.950 15:57:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:55.950 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:02.529 15:58:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:02.529 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:02.529 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:02.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:02.530 15:58:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:02.530 Found net devices under 0000:86:00.0: cvl_0_0 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:02.530 Found net devices under 0000:86:00.1: cvl_0_1 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:02.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:02.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:05:02.530 00:05:02.530 --- 10.0.0.2 ping statistics --- 00:05:02.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.530 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:02.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:02.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:05:02.530 00:05:02.530 --- 10.0.0.1 ping statistics --- 00:05:02.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.530 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2556612 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:02.530 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2556612 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2556612 ']' 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 [2024-11-20 15:58:02.743577] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:05:02.531 [2024-11-20 15:58:02.743623] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:02.531 [2024-11-20 15:58:02.823617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.531 [2024-11-20 15:58:02.867932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:02.531 [2024-11-20 15:58:02.867991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:02.531 [2024-11-20 15:58:02.867999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.531 [2024-11-20 15:58:02.868005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.531 [2024-11-20 15:58:02.868010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
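The network bring-up traced above (nvmf/common.sh `nvmf_tcp_init`) isolates the target-side NIC in a network namespace so initiator (10.0.0.1) and target (10.0.0.2) can exercise real hardware on a single host. A dry-run sketch of that plumbing, using the interface names from this log (`cvl_0_0` / `cvl_0_1`); the `setup_nvmf_tcp_ns` wrapper and the `run` switch are mine, and executing for real requires root plus the actual devices:

```shell
# Dry run by default: commands are printed, not executed.
run=echo   # set run= (empty) and re-run as root to actually apply

setup_nvmf_tcp_ns() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    $run ip netns add "$ns"                                   # private stack for the target side
    $run ip link set "$target_if" netns "$ns"                 # move the target NIC into it
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"          # initiator keeps the host stack
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port, as the log's ipts helper does
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                                   # initiator -> target sanity check
}

setup_nvmf_tcp_ns cvl_0_0 cvl_0_1
```

This is why the target process later starts under `ip netns exec cvl_0_0_ns_spdk` and the cross-namespace pings in the log succeed in both directions.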
00:05:02.531 [2024-11-20 15:58:02.869489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.531 [2024-11-20 15:58:02.869597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.531 [2024-11-20 15:58:02.869598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.531 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 [2024-11-20 15:58:03.007817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 Malloc0 00:05:02.531 15:58:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 Delay0 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 [2024-11-20 15:58:03.085894] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.531 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:02.531 [2024-11-20 15:58:03.182130] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:05.064 Initializing NVMe Controllers 00:05:05.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:05.064 controller IO queue size 128 less than required 00:05:05.064 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:05.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:05.064 Initialization complete. Launching workers. 
00:05:05.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36854 00:05:05.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36915, failed to submit 62 00:05:05.064 success 36858, unsuccessful 57, failed 0 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:05.064 rmmod nvme_tcp 00:05:05.064 rmmod nvme_fabrics 00:05:05.064 rmmod nvme_keyring 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:05.064 15:58:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2556612 ']' 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2556612 00:05:05.064 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2556612 ']' 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2556612 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2556612 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2556612' 00:05:05.065 killing process with pid 2556612 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2556612 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2556612 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.065 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:06.972 00:05:06.972 real 0m11.276s 00:05:06.972 user 0m11.802s 00:05:06.972 sys 0m5.440s 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.972 ************************************ 00:05:06.972 END TEST nvmf_abort 00:05:06.972 ************************************ 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.972 15:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:06.972 ************************************ 00:05:06.972 START TEST nvmf_ns_hotplug_stress 00:05:06.972 ************************************ 00:05:06.972 15:58:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:07.231 * Looking for test storage... 00:05:07.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.232 
15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.232 15:58:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.232 --rc genhtml_branch_coverage=1 00:05:07.232 --rc genhtml_function_coverage=1 00:05:07.232 --rc genhtml_legend=1 00:05:07.232 --rc geninfo_all_blocks=1 00:05:07.232 --rc geninfo_unexecuted_blocks=1 00:05:07.232 00:05:07.232 ' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.232 --rc genhtml_branch_coverage=1 00:05:07.232 --rc genhtml_function_coverage=1 00:05:07.232 --rc genhtml_legend=1 00:05:07.232 --rc geninfo_all_blocks=1 00:05:07.232 --rc geninfo_unexecuted_blocks=1 00:05:07.232 00:05:07.232 ' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.232 --rc genhtml_branch_coverage=1 00:05:07.232 --rc genhtml_function_coverage=1 00:05:07.232 --rc genhtml_legend=1 00:05:07.232 --rc geninfo_all_blocks=1 00:05:07.232 --rc geninfo_unexecuted_blocks=1 00:05:07.232 00:05:07.232 ' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.232 --rc genhtml_branch_coverage=1 00:05:07.232 --rc genhtml_function_coverage=1 00:05:07.232 --rc genhtml_legend=1 00:05:07.232 --rc geninfo_all_blocks=1 00:05:07.232 --rc geninfo_unexecuted_blocks=1 00:05:07.232 
00:05:07.232 ' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.232 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.232 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:07.233 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:13.798 15:58:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:13.798 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:13.798 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:13.798 15:58:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.798 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:13.799 Found net devices under 0000:86:00.0: cvl_0_0 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.799 15:58:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:13.799 Found net devices under 0000:86:00.1: cvl_0_1 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:13.799 15:58:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:13.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:13.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:05:13.799 00:05:13.799 --- 10.0.0.2 ping statistics --- 00:05:13.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.799 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:13.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:13.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:05:13.799 00:05:13.799 --- 10.0.0.1 ping statistics --- 00:05:13.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.799 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:13.799 15:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2560637 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2560637 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2560637 ']' 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
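The trace above (nvmf/common.sh@250–291) builds a point-to-point NVMe/TCP link by moving the target-side interface into a network namespace. A sketch of that sequence, reconstructed from the log — the `cvl_0_0`/`cvl_0_1` device names and 10.0.0.x addresses are the ones shown above; this is system configuration that must run as root on a host with those devices, not a portable script:

```shell
# Target interface goes into its own namespace; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Addresses on either end of the point-to-point link.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator side, then verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The namespace is what lets a single machine act as both target and initiator: the target (`nvmf_tgt`) is later launched under `ip netns exec cvl_0_0_ns_spdk`, so its traffic to 10.0.0.1 actually traverses the NIC pair.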
00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.799 [2024-11-20 15:58:14.079592] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:13.799 [2024-11-20 15:58:14.079644] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:13.799 [2024-11-20 15:58:14.160233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.799 [2024-11-20 15:58:14.203641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:13.799 [2024-11-20 15:58:14.203676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:13.799 [2024-11-20 15:58:14.203684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.799 [2024-11-20 15:58:14.203691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.799 [2024-11-20 15:58:14.203698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:13.799 [2024-11-20 15:58:14.205141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.799 [2024-11-20 15:58:14.205230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.799 [2024-11-20 15:58:14.205231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:13.799 [2024-11-20 15:58:14.523634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.799 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:14.057 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:14.315 [2024-11-20 15:58:14.913036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:14.315 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:14.315 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:14.573 Malloc0 00:05:14.573 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:14.831 Delay0 00:05:14.831 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.089 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:15.347 NULL1 00:05:15.347 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:15.605 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2561114 00:05:15.605 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:15.605 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:15.605 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.594 Read completed with error (sct=0, sc=11) 00:05:16.594 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.888 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:16.888 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:17.179 true 00:05:17.179 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:17.179 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.115 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.115 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:18.116 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:18.374 true 00:05:18.374 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:18.374 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.374 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.632 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:18.632 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:18.890 true 00:05:18.890 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:18.890 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.266 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.266 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:20.266 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:20.525 true 00:05:20.525 15:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:20.525 15:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.093 15:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.352 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:21.352 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:21.611 true 00:05:21.611 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:21.611 15:58:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.869 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.128 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:22.128 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:22.128 true 00:05:22.128 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:22.128 15:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.505 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.505 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:23.505 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:23.763 true 00:05:23.763 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2561114 00:05:23.763 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.021 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.021 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:24.021 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:24.280 true 00:05:24.280 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:24.280 15:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.657 15:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.657 15:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:25.657 15:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:25.916 true 00:05:25.916 15:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:25.916 15:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.852 15:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.852 15:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:26.852 15:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:27.111 true 00:05:27.111 15:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:27.111 15:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.369 15:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.369 15:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:27.369 15:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:27.628 true 00:05:27.628 15:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:27.628 15:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.562 15:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.821 15:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:28.821 15:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:29.079 true 00:05:29.079 15:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:29.079 15:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.338 15:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.597 15:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:29.597 15:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 
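Between hotplug steps the script repeatedly runs `kill -0 2561114` (ns_hotplug_stress.sh@44) against the `spdk_nvme_perf` PID. `kill -0` delivers no signal; it only reports whether the process can still be signalled, i.e. whether perf survived the last namespace removal. A self-contained sketch of that liveness check, using a background `sleep` as a stand-in for the perf process:

```shell
#!/usr/bin/env bash
# Stand-in for spdk_nvme_perf: any long-running background job works here.
sleep 5 &
PERF_PID=$!

# Signal 0 probes the PID without delivering anything; non-zero exit means
# the process is gone and the stress test should abort.
if kill -0 "$PERF_PID" 2>/dev/null; then
  echo "perf $PERF_PID still running"
fi

# Cleanup: terminate and reap the stand-in job.
kill "$PERF_PID" 2>/dev/null
wait "$PERF_PID" 2>/dev/null || true
```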
00:05:29.597 true 00:05:29.597 15:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:29.597 15:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.974 15:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.974 15:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:30.974 15:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:30.974 true 00:05:30.974 15:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:30.974 15:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.233 15:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.493 15:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:31.493 15:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:31.752 true 00:05:31.752 15:58:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:31.752 15:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.688 15:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.946 15:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:32.946 15:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:33.205 true 00:05:33.205 15:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:33.205 15:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.143 15:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:05:34.143 15:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:34.143 15:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:34.402 true 00:05:34.402 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:34.402 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.660 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.919 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:34.919 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:34.919 true 00:05:34.919 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:34.919 15:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.294 15:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.294 15:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 
00:05:36.294 15:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:36.294 true 00:05:36.553 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:36.553 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.553 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.811 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:36.811 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:37.070 true 00:05:37.070 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:37.070 15:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.005 15:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.263 15:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:38.263 15:58:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:38.521 true 00:05:38.521 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:38.521 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.780 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.780 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:38.780 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:39.039 true 00:05:39.040 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:39.040 15:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.974 15:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.232 15:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:40.232 15:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:40.491 true 00:05:40.491 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:40.491 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.751 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.751 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:40.751 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:41.009 true 00:05:41.009 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:41.009 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.268 15:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.268 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:05:41.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.527 15:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:41.527 15:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:41.786 true 00:05:41.786 15:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:41.786 15:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.722 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.722 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:42.722 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:42.980 true 00:05:42.980 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:42.980 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.980 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.239 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:43.239 15:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:43.497 true 00:05:43.497 15:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114 00:05:43.497 15:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.874 15:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.874 15:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:44.874 15:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:45.133 true 00:05:45.133 15:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2561114
00:05:45.133 15:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.070 15:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:46.070 Initializing NVMe Controllers
00:05:46.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:46.070 Controller IO queue size 128, less than required.
00:05:46.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:46.070 Controller IO queue size 128, less than required.
00:05:46.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:46.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:46.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:46.070 Initialization complete. Launching workers.
00:05:46.070 ========================================================
00:05:46.070                                                            Latency(us)
00:05:46.070 Device Information                                                     :     IOPS    MiB/s   Average       min        max
00:05:46.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1487.65     0.73  55114.85   1217.75 1094237.79
00:05:46.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16282.10     7.95   7861.55   2122.21  384528.55
00:05:46.070 ========================================================
00:05:46.070 Total                                                                  : 17769.75     8.68  11817.51   1217.75 1094237.79
00:05:46.070
00:05:46.070 15:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:46.070 15:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:46.328 true
00:05:46.328 15:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2561114
00:05:46.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2561114) - No such process
00:05:46.328 15:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2561114
00:05:46.328 15:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.587 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:46.587 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:46.587 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
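The trace up to this point is the script's resize stress loop (ns_hotplug_stress.sh lines 44-50): while the background I/O generator is alive, it hot-removes and re-adds namespace 1 and grows the NULL1 bdev by one block per iteration. A minimal sketch of that loop's control flow, with `rpc` stubbed in place of `scripts/rpc.py` and a fixed iteration count standing in for the liveness check, so it can be read (and run) in isolation:

```shell
#!/usr/bin/env bash
# Sketch of the resize loop from ns_hotplug_stress.sh (@44-@50).
# Assumptions: "rpc" is a no-op stub for scripts/rpc.py, and the
# "for _ in 1 2 3" bound replaces waiting on the real perf process.

rpc() { :; }  # stub: the real script invokes scripts/rpc.py "$@"

null_size=1021
for _ in 1 2 3; do
    # @44: stop once the I/O generator has exited ($$ = this shell, always alive here)
    kill -0 "$$" || break
    # @45/@46: hot-remove and re-add namespace 1 while I/O is in flight
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # @49/@50: grow the null bdev by one block each pass
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
done
```

This matches the monotonically increasing `null_size=1022`, `1023`, ... values visible in the trace; the "No such process" message below is the @44 `kill -0` failing once the I/O generator exits, which ends the loop.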
00:05:46.587 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:46.587 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.587 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:46.846 null0 00:05:46.846 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.846 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.846 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:47.105 null1 00:05:47.105 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.105 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.105 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:47.105 null2 00:05:47.364 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.364 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.364 15:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:47.364 null3 00:05:47.364 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.364 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.364 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:47.623 null4 00:05:47.623 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.623 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.623 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:47.881 null5 00:05:47.881 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.881 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.881 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:48.140 null6 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:48.140 null7 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:48.140 15:58:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:48.140 
15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2566722 2566723 2566725 2566727 2566729 2566731 2566732 2566734 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.140 15:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.398 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.656 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.914 15:58:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.914 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.172 15:58:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.172 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.173 15:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.432 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.691 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.948 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.205 15:58:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.205 15:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.463 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.722 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.980 15:58:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.980 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.238 15:58:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.238 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.239 15:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.497 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.755 15:58:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.755 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 
15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.014 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.273 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.273 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.273 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.273 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.273 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.273 15:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.273 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.273 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:52.531 rmmod nvme_tcp 00:05:52.531 rmmod nvme_fabrics 00:05:52.531 rmmod nvme_keyring 00:05:52.531 15:58:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2560637 ']' 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2560637 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2560637 ']' 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2560637 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.531 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560637 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560637' 00:05:52.790 killing process with pid 2560637 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2560637 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2560637 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' 
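
The trace above shows autotest_common.sh's `killprocess` tearing down the nvmf target: it resolves the pid's command name with `ps --no-headers -o comm=`, refuses to kill anything named `sudo`, then kills and waits. A condensed sketch of that pattern (not the actual autotest_common.sh function):

```shell
# Sketch of the killprocess flow traced above: check the pid is alive,
# verify its comm name is not "sudo", then kill it and reap it.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1          # pid must exist
  name=$(ps --no-headers -o comm= "$pid")         # same lookup as the log
  [ "$name" != "sudo" ] || return 1               # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap if it was our child
}
```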
'' == iso ']' 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:52.790 15:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:55.328 00:05:55.328 real 0m47.824s 00:05:55.328 user 3m16.037s 00:05:55.328 sys 0m15.759s 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:55.328 ************************************ 00:05:55.328 END TEST nvmf_ns_hotplug_stress 00:05:55.328 ************************************ 00:05:55.328 15:58:55 
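
The `END TEST nvmf_ns_hotplug_stress` banner above closes a loop the xtrace makes plain: ten iterations, each attaching bdevs `null0`..`null7` as namespaces 1..8 on `cnode1` and then detaching all eight, with the shuffled ordering in the log suggesting the RPCs run as background jobs. A minimal sketch of that churn, with a stub `rpc()` standing in for `scripts/rpc.py` (illustrative only, not the actual ns_hotplug_stress.sh):

```shell
# Sketch of the namespace add/remove churn traced above.
# rpc() is a stub; the real test invokes spdk/scripts/rpc.py against a live target.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do
  # attach bdevs null0..null7 as namespaces 1..8
  for n in {1..8}; do
    rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
  done
  wait
  # then detach all eight again
  for n in {1..8}; do
    rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
  done
  wait
done
```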
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.328 ************************************ 00:05:55.328 START TEST nvmf_delete_subsystem 00:05:55.328 ************************************ 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:55.328 * Looking for test storage... 00:05:55.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.328 15:58:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:55.328 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.329 --rc genhtml_branch_coverage=1 00:05:55.329 --rc genhtml_function_coverage=1 00:05:55.329 --rc genhtml_legend=1 
00:05:55.329 --rc geninfo_all_blocks=1 00:05:55.329 --rc geninfo_unexecuted_blocks=1 00:05:55.329 00:05:55.329 ' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.329 --rc genhtml_branch_coverage=1 00:05:55.329 --rc genhtml_function_coverage=1 00:05:55.329 --rc genhtml_legend=1 00:05:55.329 --rc geninfo_all_blocks=1 00:05:55.329 --rc geninfo_unexecuted_blocks=1 00:05:55.329 00:05:55.329 ' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.329 --rc genhtml_branch_coverage=1 00:05:55.329 --rc genhtml_function_coverage=1 00:05:55.329 --rc genhtml_legend=1 00:05:55.329 --rc geninfo_all_blocks=1 00:05:55.329 --rc geninfo_unexecuted_blocks=1 00:05:55.329 00:05:55.329 ' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.329 --rc genhtml_branch_coverage=1 00:05:55.329 --rc genhtml_function_coverage=1 00:05:55.329 --rc genhtml_legend=1 00:05:55.329 --rc geninfo_all_blocks=1 00:05:55.329 --rc geninfo_unexecuted_blocks=1 00:05:55.329 00:05:55.329 ' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
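
The scripts/common.sh xtrace above walks a field-by-field version comparison (`lt 1.15 2` via `cmp_versions`, splitting on `.-` and comparing each component) to decide which lcov coverage flags apply. A compressed sketch of that comparison, not the actual cmp_versions implementation:

```shell
# Sketch of the field-by-field "less than" version check traced above:
# split both versions on . and -, compare numerically, missing fields count as 0.
ver_lt() {
  local IFS=.- i v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
    (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
  done
  return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 < 2: use the pre-2.x branch-coverage flags"
```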
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.329 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.330 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.330 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.330 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:55.330 15:58:55 
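
The `[: : integer expression expected` line logged above comes from nvmf/common.sh line 33 testing an empty variable with `-eq`, which `[` rejects for a non-integer operand; the condition fails and the run continues. A tiny reproduction, where `NIC_FLAG` is a hypothetical stand-in for whichever variable was empty here:

```shell
# Re-creating the harmless error logged above, plus a defensive rewrite.
NIC_FLAG=""   # unset/empty, as in this run

# [ "" -eq 1 ] errors with "integer expression expected" and the branch is skipped:
[ "$NIC_FLAG" -eq 1 ] 2>/dev/null || echo "empty string: comparison errors, branch skipped"

# Defaulting the expansion makes the test a clean integer comparison:
[ "${NIC_FLAG:-0}" -eq 1 ] || echo "defaulted form: compares cleanly as 0"
```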
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:55.330 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:55.330 15:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:01.935 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:01.936 15:59:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:01.936 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:01.936 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:01.936 Found net devices under 0000:86:00.0: cvl_0_0 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:01.936 15:59:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:01.936 Found net devices under 0000:86:00.1: cvl_0_1 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:01.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:01.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:06:01.936 00:06:01.936 --- 10.0.0.2 ping statistics --- 00:06:01.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.936 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:01.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:01.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:06:01.936 00:06:01.936 --- 10.0.0.1 ping statistics --- 00:06:01.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.936 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:01.936 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2571123 00:06:01.937 15:59:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2571123 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2571123 ']' 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.937 15:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 [2024-11-20 15:59:01.986629] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:01.937 [2024-11-20 15:59:01.986673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.937 [2024-11-20 15:59:02.065408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.937 [2024-11-20 15:59:02.107564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:01.937 [2024-11-20 15:59:02.107599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:01.937 [2024-11-20 15:59:02.107606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.937 [2024-11-20 15:59:02.107612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.937 [2024-11-20 15:59:02.107617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:01.937 [2024-11-20 15:59:02.108783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.937 [2024-11-20 15:59:02.108785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 [2024-11-20 15:59:02.246529] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 [2024-11-20 15:59:02.266730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 NULL1 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.937 15:59:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 Delay0 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2571150 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:01.937 15:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:01.937 [2024-11-20 15:59:02.378552] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
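For readers following the trace: the `rpc_cmd` calls above (from `target/delete_subsystem.sh`) build a TCP transport, a subsystem with a delay-wrapped null bdev, and a listener, then the test deletes the subsystem while `spdk_nvme_perf` has I/O in flight. The sketch below is a dry-run reconstruction of that sequence, with the NQN, bdev names, and parameters copied from this log; the `rpc.py` invocation style is an assumption (the test uses the `rpc_cmd` wrapper), and the `rpc` function here only echoes commands rather than contacting a live target.

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence traced above. On a live SPDK target,
# replace the echo with a real "scripts/rpc.py $*" call (path is an assumption).
CMDS=""
rpc() { CMDS="$CMDS $*;"; echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
# Delay bdev keeps I/O pending long enough for the delete to race with it
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# ... spdk_nvme_perf runs against trtype:tcp traddr:10.0.0.2 trsvcid:4420 here ...
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The "Read/Write completed with error (sct=0, sc=8)" lines that follow in the log are the expected aborted completions from deleting the subsystem mid-I/O.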
00:06:03.977 15:59:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:03.977 15:59:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.977 15:59:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error 
(sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 starting I/O failed: -6 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 [2024-11-20 15:59:04.497190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18df4a0 is same with the state(6) to be set 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 
00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read 
completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Write completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.977 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error 
(sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 
Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 starting I/O failed: -6 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Read completed with error (sct=0, sc=8) 00:06:03.978 Write completed with error (sct=0, sc=8) 00:06:03.978 [2024-11-20 15:59:04.498085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c2000d4d0 is same with the state(6) to be set 00:06:04.912 [2024-11-20 15:59:05.472892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e09a0 is same with the state(6) to be set 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Write completed 
with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 [2024-11-20 15:59:05.498191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18df680 is same with the state(6) to be set 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error 
(sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.912 Write completed with error (sct=0, sc=8) 00:06:04.912 [2024-11-20 15:59:05.500550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c20000c40 is same with the state(6) to be set 00:06:04.912 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, 
sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 [2024-11-20 15:59:05.500715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c2000d800 is same with the state(6) to be set 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 
00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Write completed with error (sct=0, sc=8) 00:06:04.913 Read completed with error (sct=0, sc=8) 00:06:04.913 [2024-11-20 15:59:05.501335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c2000d020 is same with the state(6) to be set 00:06:04.913 Initializing NVMe Controllers 00:06:04.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:04.913 Controller IO queue size 128, less than required. 00:06:04.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:04.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:04.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:04.913 Initialization complete. Launching workers. 
00:06:04.913 ======================================================== 00:06:04.913 Latency(us) 00:06:04.913 Device Information : IOPS MiB/s Average min max 00:06:04.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.07 0.08 876174.23 248.12 1009789.02 00:06:04.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.47 0.08 1050047.74 649.69 2001703.65 00:06:04.913 ======================================================== 00:06:04.913 Total : 327.54 0.16 967728.26 248.12 2001703.65 00:06:04.913 00:06:04.913 [2024-11-20 15:59:05.501913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e09a0 (9): Bad file descriptor 00:06:04.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:04.913 15:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.913 15:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:04.913 15:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2571150 00:06:04.913 15:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:05.479 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:05.479 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2571150 00:06:05.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2571150) - No such process 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2571150 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:05.480 15:59:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2571150 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2571150 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.480 
15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.480 [2024-11-20 15:59:06.030797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2571842 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:05.480 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.480 [2024-11-20 15:59:06.120791] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:05.738 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.738 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:05.738 15:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.304 15:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.304 15:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:06.304 15:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.871 15:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.871 15:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:06.871 15:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.436 15:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.437 15:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:07.437 15:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.002 15:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.002 15:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:08.002 15:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.260 15:59:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.260 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:08.260 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.826 Initializing NVMe Controllers 00:06:08.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:08.826 Controller IO queue size 128, less than required. 00:06:08.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:08.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:08.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:08.826 Initialization complete. Launching workers. 00:06:08.826 ======================================================== 00:06:08.826 Latency(us) 00:06:08.826 Device Information : IOPS MiB/s Average min max 00:06:08.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002760.29 1000127.33 1007159.03 00:06:08.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005398.79 1000221.08 1041927.65 00:06:08.826 ======================================================== 00:06:08.826 Total : 256.00 0.12 1004079.54 1000127.33 1041927.65 00:06:08.826 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2571842 00:06:08.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2571842) - No such process 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2571842 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:08.826 rmmod nvme_tcp 00:06:08.826 rmmod nvme_fabrics 00:06:08.826 rmmod nvme_keyring 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2571123 ']' 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2571123 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2571123 ']' 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2571123 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:08.826 15:59:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.826 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2571123 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2571123' 00:06:09.086 killing process with pid 2571123 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2571123 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2571123 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.086 15:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:11.623 00:06:11.623 real 0m16.228s 00:06:11.623 user 0m29.397s 00:06:11.623 sys 0m5.471s 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.623 ************************************ 00:06:11.623 END TEST nvmf_delete_subsystem 00:06:11.623 ************************************ 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.623 15:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:11.623 ************************************ 00:06:11.623 START TEST nvmf_host_management 00:06:11.623 ************************************ 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:11.623 * Looking for test storage... 
00:06:11.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:11.623 15:59:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.623 15:59:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.623 --rc genhtml_branch_coverage=1 00:06:11.623 --rc genhtml_function_coverage=1 00:06:11.623 --rc genhtml_legend=1 00:06:11.623 --rc geninfo_all_blocks=1 00:06:11.623 --rc geninfo_unexecuted_blocks=1 00:06:11.623 00:06:11.623 ' 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.623 --rc genhtml_branch_coverage=1 00:06:11.623 --rc genhtml_function_coverage=1 00:06:11.623 --rc genhtml_legend=1 00:06:11.623 --rc geninfo_all_blocks=1 00:06:11.623 --rc geninfo_unexecuted_blocks=1 00:06:11.623 00:06:11.623 ' 00:06:11.623 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.623 --rc genhtml_branch_coverage=1 00:06:11.623 --rc genhtml_function_coverage=1 00:06:11.623 --rc genhtml_legend=1 00:06:11.623 --rc geninfo_all_blocks=1 00:06:11.623 --rc geninfo_unexecuted_blocks=1 00:06:11.623 00:06:11.623 ' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.624 --rc genhtml_branch_coverage=1 00:06:11.624 --rc genhtml_function_coverage=1 00:06:11.624 --rc genhtml_legend=1 00:06:11.624 --rc geninfo_all_blocks=1 00:06:11.624 --rc geninfo_unexecuted_blocks=1 00:06:11.624 00:06:11.624 ' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:11.624 15:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:18.196 15:59:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.196 15:59:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:18.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:18.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:18.196 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:18.197 15:59:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:18.197 Found net devices under 0000:86:00.0: cvl_0_0 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:18.197 Found net devices under 0000:86:00.1: cvl_0_1 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:18.197 15:59:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.197 15:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:18.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:18.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:06:18.197 00:06:18.197 --- 10.0.0.2 ping statistics --- 00:06:18.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.197 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:18.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:06:18.197 00:06:18.197 --- 10.0.0.1 ping statistics --- 00:06:18.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.197 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2576071 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2576071 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2576071 ']' 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.197 15:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.197 [2024-11-20 15:59:18.334978] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:18.197 [2024-11-20 15:59:18.335028] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.197 [2024-11-20 15:59:18.417713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.197 [2024-11-20 15:59:18.459089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:18.197 [2024-11-20 15:59:18.459129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:18.197 [2024-11-20 15:59:18.459137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.197 [2024-11-20 15:59:18.459144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.197 [2024-11-20 15:59:18.459150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:18.197 [2024-11-20 15:59:18.460836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.197 [2024-11-20 15:59:18.460945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.197 [2024-11-20 15:59:18.461033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.197 [2024-11-20 15:59:18.461033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 [2024-11-20 15:59:19.217033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:18.457 15:59:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.457 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.457 Malloc0 00:06:18.457 [2024-11-20 15:59:19.289630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2576260 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2576260 /var/tmp/bdevperf.sock 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2576260 ']' 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:18.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:18.716 { 00:06:18.716 "params": { 00:06:18.716 "name": "Nvme$subsystem", 00:06:18.716 "trtype": "$TEST_TRANSPORT", 00:06:18.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:18.716 "adrfam": "ipv4", 00:06:18.716 "trsvcid": "$NVMF_PORT", 00:06:18.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:18.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:18.716 "hdgst": ${hdgst:-false}, 
00:06:18.716 "ddgst": ${ddgst:-false} 00:06:18.716 }, 00:06:18.716 "method": "bdev_nvme_attach_controller" 00:06:18.716 } 00:06:18.716 EOF 00:06:18.716 )") 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:18.716 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:18.716 "params": { 00:06:18.716 "name": "Nvme0", 00:06:18.716 "trtype": "tcp", 00:06:18.716 "traddr": "10.0.0.2", 00:06:18.716 "adrfam": "ipv4", 00:06:18.716 "trsvcid": "4420", 00:06:18.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:18.716 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:18.716 "hdgst": false, 00:06:18.716 "ddgst": false 00:06:18.716 }, 00:06:18.716 "method": "bdev_nvme_attach_controller" 00:06:18.716 }' 00:06:18.716 [2024-11-20 15:59:19.386708] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:18.716 [2024-11-20 15:59:19.386756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576260 ] 00:06:18.716 [2024-11-20 15:59:19.463659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.716 [2024-11-20 15:59:19.505441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.974 Running I/O for 10 seconds... 
00:06:18.974 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.975 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.232 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.232 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:19.232 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:19.232 15:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=705 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 705 -ge 100 ']' 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.491 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.491 [2024-11-20 15:59:20.152847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.152901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.152918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.152934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.152954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.152977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.152992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.152998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 
[2024-11-20 15:59:20.153115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.491 [2024-11-20 15:59:20.153233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.491 [2024-11-20 15:59:20.153241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:19.492 [2024-11-20 15:59:20.153448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 
15:59:20.153536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:19.492 [2024-11-20 15:59:20.153788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.492 [2024-11-20 15:59:20.153796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.493 [2024-11-20 15:59:20.153803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.493 [2024-11-20 15:59:20.153811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.493 [2024-11-20 15:59:20.153817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.493 [2024-11-20 15:59:20.153825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.493 [2024-11-20 15:59:20.153831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.493 [2024-11-20 15:59:20.153839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.493 [2024-11-20 15:59:20.153846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.493 [2024-11-20 15:59:20.153853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f810 is same with the state(6) to be set 00:06:19.493 [2024-11-20 15:59:20.154834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:19.493 task offset: 100608 on job bdev=Nvme0n1 fails 00:06:19.493 
00:06:19.493 Latency(us) 00:06:19.493 [2024-11-20T14:59:20.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.493 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:19.493 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:19.493 Verification LBA range: start 0x0 length 0x400 00:06:19.493 Nvme0n1 : 0.41 1893.95 118.37 157.83 0.00 30345.32 1624.15 27468.13 00:06:19.493 [2024-11-20T14:59:20.330Z] =================================================================================================================== 00:06:19.493 [2024-11-20T14:59:20.330Z] Total : 1893.95 118.37 157.83 0.00 30345.32 1624.15 27468.13 00:06:19.493 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.493 [2024-11-20 15:59:20.157267] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.493 [2024-11-20 15:59:20.157291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c86500 (9): Bad file descriptor 00:06:19.493 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:19.493 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.493 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.493 [2024-11-20 15:59:20.164349] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:19.493 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.493 15:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2576260 00:06:20.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2576260) - No such process 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:20.425 { 00:06:20.425 "params": { 00:06:20.425 "name": "Nvme$subsystem", 00:06:20.425 "trtype": "$TEST_TRANSPORT", 00:06:20.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:20.425 "adrfam": "ipv4", 00:06:20.425 "trsvcid": "$NVMF_PORT", 00:06:20.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:20.425 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:20.425 "hdgst": ${hdgst:-false}, 00:06:20.425 "ddgst": ${ddgst:-false} 00:06:20.425 }, 00:06:20.425 "method": "bdev_nvme_attach_controller" 00:06:20.425 } 00:06:20.425 EOF 00:06:20.425 )") 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:20.425 15:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:20.425 "params": { 00:06:20.425 "name": "Nvme0", 00:06:20.425 "trtype": "tcp", 00:06:20.425 "traddr": "10.0.0.2", 00:06:20.425 "adrfam": "ipv4", 00:06:20.425 "trsvcid": "4420", 00:06:20.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:20.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:20.425 "hdgst": false, 00:06:20.425 "ddgst": false 00:06:20.425 }, 00:06:20.425 "method": "bdev_nvme_attach_controller" 00:06:20.425 }' 00:06:20.425 [2024-11-20 15:59:21.219514] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:20.425 [2024-11-20 15:59:21.219563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576592 ] 00:06:20.683 [2024-11-20 15:59:21.300059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.683 [2024-11-20 15:59:21.339475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.683 Running I/O for 1 seconds... 
00:06:22.054 1984.00 IOPS, 124.00 MiB/s 00:06:22.054 Latency(us) 00:06:22.054 [2024-11-20T14:59:22.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.054 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:22.054 Verification LBA range: start 0x0 length 0x400 00:06:22.054 Nvme0n1 : 1.03 1995.19 124.70 0.00 0.00 31570.83 6154.69 27468.13 00:06:22.054 [2024-11-20T14:59:22.891Z] =================================================================================================================== 00:06:22.054 [2024-11-20T14:59:22.891Z] Total : 1995.19 124.70 0.00 0.00 31570.83 6154.69 27468.13 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.054 15:59:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:22.054 rmmod nvme_tcp 00:06:22.054 rmmod nvme_fabrics 00:06:22.054 rmmod nvme_keyring 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2576071 ']' 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2576071 00:06:22.054 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2576071 ']' 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2576071 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2576071 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2576071' 00:06:22.055 killing process with pid 2576071 00:06:22.055 15:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2576071 00:06:22.055 15:59:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2576071 00:06:22.313 [2024-11-20 15:59:22.981356] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.313 15:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:24.850 00:06:24.850 real 0m13.071s 00:06:24.850 user 0m22.115s 
00:06:24.850 sys 0m5.747s 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.850 ************************************ 00:06:24.850 END TEST nvmf_host_management 00:06:24.850 ************************************ 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.850 ************************************ 00:06:24.850 START TEST nvmf_lvol 00:06:24.850 ************************************ 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:24.850 * Looking for test storage... 
00:06:24.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.850 15:59:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.850 --rc genhtml_branch_coverage=1 00:06:24.850 --rc genhtml_function_coverage=1 00:06:24.850 --rc genhtml_legend=1 00:06:24.850 --rc geninfo_all_blocks=1 00:06:24.850 --rc geninfo_unexecuted_blocks=1 
00:06:24.850 00:06:24.850 ' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.850 --rc genhtml_branch_coverage=1 00:06:24.850 --rc genhtml_function_coverage=1 00:06:24.850 --rc genhtml_legend=1 00:06:24.850 --rc geninfo_all_blocks=1 00:06:24.850 --rc geninfo_unexecuted_blocks=1 00:06:24.850 00:06:24.850 ' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.850 --rc genhtml_branch_coverage=1 00:06:24.850 --rc genhtml_function_coverage=1 00:06:24.850 --rc genhtml_legend=1 00:06:24.850 --rc geninfo_all_blocks=1 00:06:24.850 --rc geninfo_unexecuted_blocks=1 00:06:24.850 00:06:24.850 ' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.850 --rc genhtml_branch_coverage=1 00:06:24.850 --rc genhtml_function_coverage=1 00:06:24.850 --rc genhtml_legend=1 00:06:24.850 --rc geninfo_all_blocks=1 00:06:24.850 --rc geninfo_unexecuted_blocks=1 00:06:24.850 00:06:24.850 ' 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.850 15:59:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.850 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.851 15:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.422 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:31.423 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:31.423 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.423 
15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:31.423 Found net devices under 0000:86:00.0: cvl_0_0 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.423 15:59:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:31.423 Found net devices under 0000:86:00.1: cvl_0_1 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:31.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:06:31.423 00:06:31.423 --- 10.0.0.2 ping statistics --- 00:06:31.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.423 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:06:31.423 00:06:31.423 --- 10.0.0.1 ping statistics --- 00:06:31.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.423 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.423 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2580372 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2580372 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2580372 ']' 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:31.424 [2024-11-20 15:59:31.427020] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:06:31.424 [2024-11-20 15:59:31.427068] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.424 [2024-11-20 15:59:31.507956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.424 [2024-11-20 15:59:31.548121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.424 [2024-11-20 15:59:31.548159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.424 [2024-11-20 15:59:31.548166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.424 [2024-11-20 15:59:31.548171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.424 [2024-11-20 15:59:31.548177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.424 [2024-11-20 15:59:31.549544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.424 [2024-11-20 15:59:31.549653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.424 [2024-11-20 15:59:31.549654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:31.424 [2024-11-20 15:59:31.868241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.424 15:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:31.424 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:31.424 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:31.682 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:31.682 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:31.940 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:31.940 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8907ce3e-c177-4917-89a3-b95053b4f543 00:06:31.940 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8907ce3e-c177-4917-89a3-b95053b4f543 lvol 20 00:06:32.197 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a5ba8cf8-98e0-422d-8f5e-e421a345d5ac 00:06:32.197 15:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:32.455 15:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5ba8cf8-98e0-422d-8f5e-e421a345d5ac 00:06:32.713 15:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:32.713 [2024-11-20 15:59:33.533329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.970 15:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.970 15:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2580857 00:06:32.970 15:59:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:32.970 15:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:34.341 15:59:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a5ba8cf8-98e0-422d-8f5e-e421a345d5ac MY_SNAPSHOT 00:06:34.341 15:59:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7e61ad32-ebbd-4f13-aedc-5d11bb70e36e 00:06:34.341 15:59:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a5ba8cf8-98e0-422d-8f5e-e421a345d5ac 30 00:06:34.599 15:59:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7e61ad32-ebbd-4f13-aedc-5d11bb70e36e MY_CLONE 00:06:34.857 15:59:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c919ff31-cf95-407f-9534-647de0023557 00:06:34.857 15:59:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c919ff31-cf95-407f-9534-647de0023557 00:06:35.422 15:59:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2580857 00:06:43.533 Initializing NVMe Controllers 00:06:43.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:43.533 Controller IO queue size 128, less than required. 00:06:43.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:43.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:43.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:43.533 Initialization complete. Launching workers. 00:06:43.533 ======================================================== 00:06:43.533 Latency(us) 00:06:43.533 Device Information : IOPS MiB/s Average min max 00:06:43.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11991.80 46.84 10674.30 487.39 119844.93 00:06:43.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11846.60 46.28 10808.99 3183.48 51996.56 00:06:43.533 ======================================================== 00:06:43.533 Total : 23838.40 93.12 10741.23 487.39 119844.93 00:06:43.533 00:06:43.533 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:43.533 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a5ba8cf8-98e0-422d-8f5e-e421a345d5ac 00:06:43.791 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8907ce3e-c177-4917-89a3-b95053b4f543 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:44.048 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:44.049 rmmod nvme_tcp 00:06:44.049 rmmod nvme_fabrics 00:06:44.049 rmmod nvme_keyring 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2580372 ']' 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2580372 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2580372 ']' 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2580372 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2580372 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2580372' 00:06:44.049 killing process with pid 2580372 00:06:44.049 15:59:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2580372 00:06:44.049 15:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2580372 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.307 15:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.844 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:46.844 00:06:46.844 real 0m22.000s 00:06:46.844 user 1m3.244s 00:06:46.844 sys 0m7.676s 00:06:46.844 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.844 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.844 ************************************ 00:06:46.844 END TEST 
nvmf_lvol 00:06:46.844 ************************************ 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.845 ************************************ 00:06:46.845 START TEST nvmf_lvs_grow 00:06:46.845 ************************************ 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:46.845 * Looking for test storage... 00:06:46.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.845 15:59:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.845 --rc genhtml_branch_coverage=1 00:06:46.845 --rc genhtml_function_coverage=1 00:06:46.845 --rc genhtml_legend=1 00:06:46.845 --rc geninfo_all_blocks=1 00:06:46.845 --rc geninfo_unexecuted_blocks=1 00:06:46.845 00:06:46.845 ' 
00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.845 --rc genhtml_branch_coverage=1 00:06:46.845 --rc genhtml_function_coverage=1 00:06:46.845 --rc genhtml_legend=1 00:06:46.845 --rc geninfo_all_blocks=1 00:06:46.845 --rc geninfo_unexecuted_blocks=1 00:06:46.845 00:06:46.845 ' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.845 --rc genhtml_branch_coverage=1 00:06:46.845 --rc genhtml_function_coverage=1 00:06:46.845 --rc genhtml_legend=1 00:06:46.845 --rc geninfo_all_blocks=1 00:06:46.845 --rc geninfo_unexecuted_blocks=1 00:06:46.845 00:06:46.845 ' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.845 --rc genhtml_branch_coverage=1 00:06:46.845 --rc genhtml_function_coverage=1 00:06:46.845 --rc genhtml_legend=1 00:06:46.845 --rc geninfo_all_blocks=1 00:06:46.845 --rc geninfo_unexecuted_blocks=1 00:06:46.845 00:06:46.845 ' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.845 15:59:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.845 
15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.845 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.846 15:59:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.846 
15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:46.846 15:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:53.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:53.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.412 
15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:53.412 Found net devices under 0000:86:00.0: cvl_0_0 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.412 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:53.413 Found net devices under 0000:86:00.1: cvl_0_1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.413 15:59:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:06:53.413 00:06:53.413 --- 10.0.0.2 ping statistics --- 00:06:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.413 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:06:53.413 00:06:53.413 --- 10.0.0.1 ping statistics --- 00:06:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.413 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2586250 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2586250 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2586250 ']' 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.413 [2024-11-20 15:59:53.516254] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:06:53.413 [2024-11-20 15:59:53.516297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.413 [2024-11-20 15:59:53.593212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.413 [2024-11-20 15:59:53.632510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.413 [2024-11-20 15:59:53.632545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.413 [2024-11-20 15:59:53.632552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.413 [2024-11-20 15:59:53.632557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.413 [2024-11-20 15:59:53.632562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:53.413 [2024-11-20 15:59:53.633111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:53.413 [2024-11-20 15:59:53.949922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.413 ************************************ 00:06:53.413 START TEST lvs_grow_clean 00:06:53.413 ************************************ 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:53.413 15:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.413 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.413 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:53.413 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:53.414 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:53.672 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8bed3cad-2925-45e2-911a-9e94b99a74d9 00:06:53.672 15:59:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:06:53.672 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:53.930 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:53.930 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:53.930 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 lvol 150 00:06:54.188 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=546b5fa3-1114-4cbb-be78-a29af3c50bc5 00:06:54.188 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:54.188 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:54.188 [2024-11-20 15:59:54.971871] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:54.188 [2024-11-20 15:59:54.971922] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:54.188 true 00:06:54.188 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:06:54.188 15:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:54.446 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:54.446 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:54.704 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 546b5fa3-1114-4cbb-be78-a29af3c50bc5 00:06:54.963 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:54.963 [2024-11-20 15:59:55.730140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.963 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2586750 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:55.221 15:59:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2586750 /var/tmp/bdevperf.sock 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2586750 ']' 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:55.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.221 15:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:55.221 [2024-11-20 15:59:55.980261] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
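An aside on the cluster counts this test checks: with the 200M AIO file and the 4194304-byte cluster size used above, the expected total_data_clusters values follow from simple arithmetic, assuming (as the observed 49 and 99 suggest for this run) that the lvstore reserves roughly one cluster for its own metadata:

```shell
# Expected total_data_clusters for an lvstore on an aio_bdev of the given
# size in MiB. Assumption: ~1 cluster is reserved for lvstore metadata,
# which is what the observed counts (49 at 200M, 99 at 400M) imply here.
MiB=$((1024 * 1024))
CLUSTER=$((4 * MiB))          # --cluster-sz 4194304

clusters() { echo $(( $1 * MiB / CLUSTER - 1 )); }

clusters 200    # 49 - the count checked right after lvstore creation
clusters 400    # 99 - reported only after bdev_lvol_grow_lvstore; the
                #      bdev_aio_rescan alone leaves the lvstore at 49
```

This is also why the test re-checks data_clusters == 49 after growing the AIO file: the rescan resizes the base bdev, but the lvstore does not claim the new space until grow_lvstore runs.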
00:06:55.221 [2024-11-20 15:59:55.980309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586750 ] 00:06:55.480 [2024-11-20 15:59:56.057018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.480 [2024-11-20 15:59:56.098669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.046 15:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.046 15:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:56.046 15:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:56.613 Nvme0n1 00:06:56.613 15:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:56.613 [ 00:06:56.613 { 00:06:56.613 "name": "Nvme0n1", 00:06:56.613 "aliases": [ 00:06:56.613 "546b5fa3-1114-4cbb-be78-a29af3c50bc5" 00:06:56.613 ], 00:06:56.613 "product_name": "NVMe disk", 00:06:56.613 "block_size": 4096, 00:06:56.613 "num_blocks": 38912, 00:06:56.613 "uuid": "546b5fa3-1114-4cbb-be78-a29af3c50bc5", 00:06:56.613 "numa_id": 1, 00:06:56.613 "assigned_rate_limits": { 00:06:56.613 "rw_ios_per_sec": 0, 00:06:56.613 "rw_mbytes_per_sec": 0, 00:06:56.613 "r_mbytes_per_sec": 0, 00:06:56.613 "w_mbytes_per_sec": 0 00:06:56.613 }, 00:06:56.613 "claimed": false, 00:06:56.613 "zoned": false, 00:06:56.613 "supported_io_types": { 00:06:56.613 "read": true, 
00:06:56.613 "write": true, 00:06:56.613 "unmap": true, 00:06:56.613 "flush": true, 00:06:56.613 "reset": true, 00:06:56.613 "nvme_admin": true, 00:06:56.613 "nvme_io": true, 00:06:56.613 "nvme_io_md": false, 00:06:56.613 "write_zeroes": true, 00:06:56.613 "zcopy": false, 00:06:56.613 "get_zone_info": false, 00:06:56.613 "zone_management": false, 00:06:56.613 "zone_append": false, 00:06:56.613 "compare": true, 00:06:56.613 "compare_and_write": true, 00:06:56.613 "abort": true, 00:06:56.613 "seek_hole": false, 00:06:56.613 "seek_data": false, 00:06:56.613 "copy": true, 00:06:56.613 "nvme_iov_md": false 00:06:56.613 }, 00:06:56.613 "memory_domains": [ 00:06:56.613 { 00:06:56.613 "dma_device_id": "system", 00:06:56.613 "dma_device_type": 1 00:06:56.613 } 00:06:56.613 ], 00:06:56.613 "driver_specific": { 00:06:56.613 "nvme": [ 00:06:56.613 { 00:06:56.613 "trid": { 00:06:56.613 "trtype": "TCP", 00:06:56.613 "adrfam": "IPv4", 00:06:56.613 "traddr": "10.0.0.2", 00:06:56.613 "trsvcid": "4420", 00:06:56.613 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:56.613 }, 00:06:56.613 "ctrlr_data": { 00:06:56.613 "cntlid": 1, 00:06:56.613 "vendor_id": "0x8086", 00:06:56.613 "model_number": "SPDK bdev Controller", 00:06:56.613 "serial_number": "SPDK0", 00:06:56.613 "firmware_revision": "25.01", 00:06:56.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.613 "oacs": { 00:06:56.613 "security": 0, 00:06:56.613 "format": 0, 00:06:56.613 "firmware": 0, 00:06:56.613 "ns_manage": 0 00:06:56.613 }, 00:06:56.613 "multi_ctrlr": true, 00:06:56.613 "ana_reporting": false 00:06:56.613 }, 00:06:56.613 "vs": { 00:06:56.613 "nvme_version": "1.3" 00:06:56.613 }, 00:06:56.613 "ns_data": { 00:06:56.613 "id": 1, 00:06:56.613 "can_share": true 00:06:56.613 } 00:06:56.613 } 00:06:56.613 ], 00:06:56.613 "mp_policy": "active_passive" 00:06:56.613 } 00:06:56.613 } 00:06:56.613 ] 00:06:56.614 15:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2586984 00:06:56.614 15:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:56.614 15:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:56.872 Running I/O for 10 seconds... 00:06:57.807 Latency(us) 00:06:57.807 [2024-11-20T14:59:58.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.807 Nvme0n1 : 1.00 21646.00 84.55 0.00 0.00 0.00 0.00 0.00 00:06:57.807 [2024-11-20T14:59:58.644Z] =================================================================================================================== 00:06:57.807 [2024-11-20T14:59:58.644Z] Total : 21646.00 84.55 0.00 0.00 0.00 0.00 0.00 00:06:57.807 00:06:58.741 15:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:06:58.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.741 Nvme0n1 : 2.00 21807.00 85.18 0.00 0.00 0.00 0.00 0.00 00:06:58.741 [2024-11-20T14:59:59.579Z] =================================================================================================================== 00:06:58.742 [2024-11-20T14:59:59.579Z] Total : 21807.00 85.18 0.00 0.00 0.00 0.00 0.00 00:06:58.742 00:06:58.999 true 00:06:58.999 15:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:06:58.999 15:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:59.257 15:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:59.257 15:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:59.257 15:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2586984 00:06:59.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.822 Nvme0n1 : 3.00 21730.00 84.88 0.00 0.00 0.00 0.00 0.00 00:06:59.822 [2024-11-20T15:00:00.659Z] =================================================================================================================== 00:06:59.822 [2024-11-20T15:00:00.659Z] Total : 21730.00 84.88 0.00 0.00 0.00 0.00 0.00 00:06:59.822 00:07:00.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.756 Nvme0n1 : 4.00 21803.50 85.17 0.00 0.00 0.00 0.00 0.00 00:07:00.756 [2024-11-20T15:00:01.593Z] =================================================================================================================== 00:07:00.756 [2024-11-20T15:00:01.593Z] Total : 21803.50 85.17 0.00 0.00 0.00 0.00 0.00 00:07:00.756 00:07:01.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.749 Nvme0n1 : 5.00 21810.80 85.20 0.00 0.00 0.00 0.00 0.00 00:07:01.749 [2024-11-20T15:00:02.586Z] =================================================================================================================== 00:07:01.749 [2024-11-20T15:00:02.586Z] Total : 21810.80 85.20 0.00 0.00 0.00 0.00 0.00 00:07:01.749 00:07:03.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.123 Nvme0n1 : 6.00 21859.67 85.39 0.00 0.00 0.00 0.00 0.00 00:07:03.123 [2024-11-20T15:00:03.960Z] =================================================================================================================== 00:07:03.123 
[2024-11-20T15:00:03.960Z] Total : 21859.67 85.39 0.00 0.00 0.00 0.00 0.00 00:07:03.123 00:07:04.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.058 Nvme0n1 : 7.00 21875.14 85.45 0.00 0.00 0.00 0.00 0.00 00:07:04.058 [2024-11-20T15:00:04.895Z] =================================================================================================================== 00:07:04.058 [2024-11-20T15:00:04.895Z] Total : 21875.14 85.45 0.00 0.00 0.00 0.00 0.00 00:07:04.058 00:07:04.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.995 Nvme0n1 : 8.00 21886.75 85.50 0.00 0.00 0.00 0.00 0.00 00:07:04.995 [2024-11-20T15:00:05.832Z] =================================================================================================================== 00:07:04.995 [2024-11-20T15:00:05.832Z] Total : 21886.75 85.50 0.00 0.00 0.00 0.00 0.00 00:07:04.995 00:07:05.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.930 Nvme0n1 : 9.00 21895.78 85.53 0.00 0.00 0.00 0.00 0.00 00:07:05.930 [2024-11-20T15:00:06.767Z] =================================================================================================================== 00:07:05.930 [2024-11-20T15:00:06.767Z] Total : 21895.78 85.53 0.00 0.00 0.00 0.00 0.00 00:07:05.930 00:07:06.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.866 Nvme0n1 : 10.00 21902.20 85.56 0.00 0.00 0.00 0.00 0.00 00:07:06.866 [2024-11-20T15:00:07.703Z] =================================================================================================================== 00:07:06.866 [2024-11-20T15:00:07.703Z] Total : 21902.20 85.56 0.00 0.00 0.00 0.00 0.00 00:07:06.866 00:07:06.866 00:07:06.866 Latency(us) 00:07:06.866 [2024-11-20T15:00:07.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:06.866 Nvme0n1 : 10.01 21902.19 85.56 0.00 0.00 5840.01 4473.54 15272.74 00:07:06.866 [2024-11-20T15:00:07.703Z] =================================================================================================================== 00:07:06.866 [2024-11-20T15:00:07.703Z] Total : 21902.19 85.56 0.00 0.00 5840.01 4473.54 15272.74 00:07:06.866 { 00:07:06.866 "results": [ 00:07:06.866 { 00:07:06.866 "job": "Nvme0n1", 00:07:06.866 "core_mask": "0x2", 00:07:06.866 "workload": "randwrite", 00:07:06.866 "status": "finished", 00:07:06.866 "queue_depth": 128, 00:07:06.866 "io_size": 4096, 00:07:06.866 "runtime": 10.005485, 00:07:06.866 "iops": 21902.186650622134, 00:07:06.866 "mibps": 85.55541660399271, 00:07:06.866 "io_failed": 0, 00:07:06.866 "io_timeout": 0, 00:07:06.866 "avg_latency_us": 5840.013946946451, 00:07:06.866 "min_latency_us": 4473.544347826087, 00:07:06.866 "max_latency_us": 15272.737391304348 00:07:06.866 } 00:07:06.866 ], 00:07:06.866 "core_count": 1 00:07:06.866 } 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2586750 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2586750 ']' 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2586750 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586750 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:06.866 16:00:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586750' 00:07:06.866 killing process with pid 2586750 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2586750 00:07:06.866 Received shutdown signal, test time was about 10.000000 seconds 00:07:06.866 00:07:06.866 Latency(us) 00:07:06.866 [2024-11-20T15:00:07.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.866 [2024-11-20T15:00:07.703Z] =================================================================================================================== 00:07:06.866 [2024-11-20T15:00:07.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:06.866 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2586750 00:07:07.125 16:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:07.384 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.642 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:07.642 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:07.642 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:07.642 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:07.642 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:07.901 [2024-11-20 16:00:08.603665] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.901 
16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:07.901 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:08.160 request: 00:07:08.160 { 00:07:08.160 "uuid": "8bed3cad-2925-45e2-911a-9e94b99a74d9", 00:07:08.160 "method": "bdev_lvol_get_lvstores", 00:07:08.160 "req_id": 1 00:07:08.160 } 00:07:08.160 Got JSON-RPC error response 00:07:08.160 response: 00:07:08.160 { 00:07:08.160 "code": -19, 00:07:08.160 "message": "No such device" 00:07:08.160 } 00:07:08.160 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:08.160 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.160 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.160 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.160 16:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.419 aio_bdev 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 546b5fa3-1114-4cbb-be78-a29af3c50bc5 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=546b5fa3-1114-4cbb-be78-a29af3c50bc5 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:08.419 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 546b5fa3-1114-4cbb-be78-a29af3c50bc5 -t 2000 00:07:08.678 [ 00:07:08.678 { 00:07:08.678 "name": "546b5fa3-1114-4cbb-be78-a29af3c50bc5", 00:07:08.678 "aliases": [ 00:07:08.678 "lvs/lvol" 00:07:08.678 ], 00:07:08.678 "product_name": "Logical Volume", 00:07:08.678 "block_size": 4096, 00:07:08.678 "num_blocks": 38912, 00:07:08.678 "uuid": "546b5fa3-1114-4cbb-be78-a29af3c50bc5", 00:07:08.678 "assigned_rate_limits": { 00:07:08.678 "rw_ios_per_sec": 0, 00:07:08.678 "rw_mbytes_per_sec": 0, 00:07:08.678 "r_mbytes_per_sec": 0, 00:07:08.678 "w_mbytes_per_sec": 0 00:07:08.678 }, 00:07:08.678 "claimed": false, 00:07:08.678 "zoned": false, 00:07:08.678 "supported_io_types": { 00:07:08.678 "read": true, 00:07:08.678 "write": true, 00:07:08.678 "unmap": true, 00:07:08.678 "flush": false, 00:07:08.678 "reset": true, 00:07:08.678 
"nvme_admin": false, 00:07:08.678 "nvme_io": false, 00:07:08.678 "nvme_io_md": false, 00:07:08.678 "write_zeroes": true, 00:07:08.678 "zcopy": false, 00:07:08.678 "get_zone_info": false, 00:07:08.678 "zone_management": false, 00:07:08.678 "zone_append": false, 00:07:08.678 "compare": false, 00:07:08.678 "compare_and_write": false, 00:07:08.678 "abort": false, 00:07:08.678 "seek_hole": true, 00:07:08.678 "seek_data": true, 00:07:08.678 "copy": false, 00:07:08.678 "nvme_iov_md": false 00:07:08.678 }, 00:07:08.678 "driver_specific": { 00:07:08.678 "lvol": { 00:07:08.678 "lvol_store_uuid": "8bed3cad-2925-45e2-911a-9e94b99a74d9", 00:07:08.678 "base_bdev": "aio_bdev", 00:07:08.678 "thin_provision": false, 00:07:08.678 "num_allocated_clusters": 38, 00:07:08.678 "snapshot": false, 00:07:08.678 "clone": false, 00:07:08.678 "esnap_clone": false 00:07:08.678 } 00:07:08.678 } 00:07:08.678 } 00:07:08.678 ] 00:07:08.678 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:08.678 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:08.678 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:08.937 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:08.937 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:08.937 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:09.196 16:00:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:09.196 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 546b5fa3-1114-4cbb-be78-a29af3c50bc5 00:07:09.196 16:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8bed3cad-2925-45e2-911a-9e94b99a74d9 00:07:09.455 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.714 00:07:09.714 real 0m16.394s 00:07:09.714 user 0m16.031s 00:07:09.714 sys 0m1.590s 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:09.714 ************************************ 00:07:09.714 END TEST lvs_grow_clean 00:07:09.714 ************************************ 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.714 ************************************ 
00:07:09.714 START TEST lvs_grow_dirty 00:07:09.714 ************************************ 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.714 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.973 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:09.973 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:10.231 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:10.231 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:10.231 16:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.490 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.490 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.490 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca lvol 150 00:07:10.490 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=424184be-f90c-4f39-9519-d77a76c01f20 00:07:10.490 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.490 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:10.749 [2024-11-20 16:00:11.481963] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:10.749 [2024-11-20 16:00:11.482035] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:10.749 true 00:07:10.749 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:10.749 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.008 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.008 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.266 16:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 424184be-f90c-4f39-9519-d77a76c01f20 00:07:11.266 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:11.524 [2024-11-20 16:00:12.200121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.524 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2590092 00:07:11.783 16:00:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2590092 /var/tmp/bdevperf.sock 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2590092 ']' 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:11.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.783 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.783 [2024-11-20 16:00:12.444959] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:07:11.783 [2024-11-20 16:00:12.445010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590092 ] 00:07:11.783 [2024-11-20 16:00:12.518646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.784 [2024-11-20 16:00:12.561120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.042 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.042 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:12.042 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:12.301 Nvme0n1 00:07:12.301 16:00:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:12.301 [ 00:07:12.301 { 00:07:12.301 "name": "Nvme0n1", 00:07:12.301 "aliases": [ 00:07:12.301 "424184be-f90c-4f39-9519-d77a76c01f20" 00:07:12.301 ], 00:07:12.301 "product_name": "NVMe disk", 00:07:12.301 "block_size": 4096, 00:07:12.301 "num_blocks": 38912, 00:07:12.301 "uuid": "424184be-f90c-4f39-9519-d77a76c01f20", 00:07:12.301 "numa_id": 1, 00:07:12.301 "assigned_rate_limits": { 00:07:12.301 "rw_ios_per_sec": 0, 00:07:12.301 "rw_mbytes_per_sec": 0, 00:07:12.301 "r_mbytes_per_sec": 0, 00:07:12.301 "w_mbytes_per_sec": 0 00:07:12.301 }, 00:07:12.301 "claimed": false, 00:07:12.301 "zoned": false, 00:07:12.301 "supported_io_types": { 00:07:12.301 "read": true, 
00:07:12.301 "write": true, 00:07:12.301 "unmap": true, 00:07:12.301 "flush": true, 00:07:12.301 "reset": true, 00:07:12.301 "nvme_admin": true, 00:07:12.301 "nvme_io": true, 00:07:12.301 "nvme_io_md": false, 00:07:12.301 "write_zeroes": true, 00:07:12.301 "zcopy": false, 00:07:12.301 "get_zone_info": false, 00:07:12.301 "zone_management": false, 00:07:12.301 "zone_append": false, 00:07:12.301 "compare": true, 00:07:12.301 "compare_and_write": true, 00:07:12.301 "abort": true, 00:07:12.301 "seek_hole": false, 00:07:12.301 "seek_data": false, 00:07:12.301 "copy": true, 00:07:12.301 "nvme_iov_md": false 00:07:12.301 }, 00:07:12.301 "memory_domains": [ 00:07:12.301 { 00:07:12.301 "dma_device_id": "system", 00:07:12.301 "dma_device_type": 1 00:07:12.301 } 00:07:12.301 ], 00:07:12.301 "driver_specific": { 00:07:12.301 "nvme": [ 00:07:12.301 { 00:07:12.301 "trid": { 00:07:12.301 "trtype": "TCP", 00:07:12.301 "adrfam": "IPv4", 00:07:12.301 "traddr": "10.0.0.2", 00:07:12.301 "trsvcid": "4420", 00:07:12.301 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:12.301 }, 00:07:12.301 "ctrlr_data": { 00:07:12.301 "cntlid": 1, 00:07:12.301 "vendor_id": "0x8086", 00:07:12.301 "model_number": "SPDK bdev Controller", 00:07:12.301 "serial_number": "SPDK0", 00:07:12.301 "firmware_revision": "25.01", 00:07:12.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.301 "oacs": { 00:07:12.301 "security": 0, 00:07:12.301 "format": 0, 00:07:12.301 "firmware": 0, 00:07:12.301 "ns_manage": 0 00:07:12.301 }, 00:07:12.301 "multi_ctrlr": true, 00:07:12.301 "ana_reporting": false 00:07:12.301 }, 00:07:12.301 "vs": { 00:07:12.301 "nvme_version": "1.3" 00:07:12.301 }, 00:07:12.301 "ns_data": { 00:07:12.301 "id": 1, 00:07:12.301 "can_share": true 00:07:12.301 } 00:07:12.301 } 00:07:12.301 ], 00:07:12.301 "mp_policy": "active_passive" 00:07:12.301 } 00:07:12.301 } 00:07:12.301 ] 00:07:12.301 16:00:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2590111 00:07:12.301 16:00:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:12.301 16:00:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:12.560 Running I/O for 10 seconds... 00:07:13.496 Latency(us) 00:07:13.496 [2024-11-20T15:00:14.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.496 Nvme0n1 : 1.00 22549.00 88.08 0.00 0.00 0.00 0.00 0.00 00:07:13.496 [2024-11-20T15:00:14.333Z] =================================================================================================================== 00:07:13.496 [2024-11-20T15:00:14.333Z] Total : 22549.00 88.08 0.00 0.00 0.00 0.00 0.00 00:07:13.496 00:07:14.432 16:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:14.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.432 Nvme0n1 : 2.00 22769.50 88.94 0.00 0.00 0.00 0.00 0.00 00:07:14.432 [2024-11-20T15:00:15.269Z] =================================================================================================================== 00:07:14.432 [2024-11-20T15:00:15.269Z] Total : 22769.50 88.94 0.00 0.00 0.00 0.00 0.00 00:07:14.432 00:07:14.691 true 00:07:14.691 16:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:14.691 16:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:14.691 16:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:14.691 16:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:14.691 16:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2590111 00:07:15.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.627 Nvme0n1 : 3.00 22802.33 89.07 0.00 0.00 0.00 0.00 0.00 00:07:15.627 [2024-11-20T15:00:16.464Z] =================================================================================================================== 00:07:15.627 [2024-11-20T15:00:16.464Z] Total : 22802.33 89.07 0.00 0.00 0.00 0.00 0.00 00:07:15.627 00:07:16.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.561 Nvme0n1 : 4.00 22859.00 89.29 0.00 0.00 0.00 0.00 0.00 00:07:16.561 [2024-11-20T15:00:17.398Z] =================================================================================================================== 00:07:16.561 [2024-11-20T15:00:17.398Z] Total : 22859.00 89.29 0.00 0.00 0.00 0.00 0.00 00:07:16.561 00:07:17.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.497 Nvme0n1 : 5.00 22901.20 89.46 0.00 0.00 0.00 0.00 0.00 00:07:17.497 [2024-11-20T15:00:18.334Z] =================================================================================================================== 00:07:17.497 [2024-11-20T15:00:18.334Z] Total : 22901.20 89.46 0.00 0.00 0.00 0.00 0.00 00:07:17.497 00:07:18.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.433 Nvme0n1 : 6.00 22919.50 89.53 0.00 0.00 0.00 0.00 0.00 00:07:18.433 [2024-11-20T15:00:19.270Z] =================================================================================================================== 00:07:18.433 
[2024-11-20T15:00:19.270Z] Total : 22919.50 89.53 0.00 0.00 0.00 0.00 0.00 00:07:18.433 00:07:19.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.369 Nvme0n1 : 7.00 22922.14 89.54 0.00 0.00 0.00 0.00 0.00 00:07:19.369 [2024-11-20T15:00:20.206Z] =================================================================================================================== 00:07:19.369 [2024-11-20T15:00:20.206Z] Total : 22922.14 89.54 0.00 0.00 0.00 0.00 0.00 00:07:19.369 00:07:20.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.745 Nvme0n1 : 8.00 22910.00 89.49 0.00 0.00 0.00 0.00 0.00 00:07:20.745 [2024-11-20T15:00:21.582Z] =================================================================================================================== 00:07:20.745 [2024-11-20T15:00:21.582Z] Total : 22910.00 89.49 0.00 0.00 0.00 0.00 0.00 00:07:20.745 00:07:21.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.680 Nvme0n1 : 9.00 22931.89 89.58 0.00 0.00 0.00 0.00 0.00 00:07:21.680 [2024-11-20T15:00:22.517Z] =================================================================================================================== 00:07:21.680 [2024-11-20T15:00:22.517Z] Total : 22931.89 89.58 0.00 0.00 0.00 0.00 0.00 00:07:21.680 00:07:22.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.617 Nvme0n1 : 10.00 22953.30 89.66 0.00 0.00 0.00 0.00 0.00 00:07:22.617 [2024-11-20T15:00:23.454Z] =================================================================================================================== 00:07:22.617 [2024-11-20T15:00:23.454Z] Total : 22953.30 89.66 0.00 0.00 0.00 0.00 0.00 00:07:22.617 00:07:22.617 00:07:22.617 Latency(us) 00:07:22.617 [2024-11-20T15:00:23.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:22.617 Nvme0n1 : 10.00 22961.02 89.69 0.00 0.00 5571.81 3219.81 13905.03 00:07:22.617 [2024-11-20T15:00:23.454Z] =================================================================================================================== 00:07:22.617 [2024-11-20T15:00:23.454Z] Total : 22961.02 89.69 0.00 0.00 5571.81 3219.81 13905.03 00:07:22.617 { 00:07:22.617 "results": [ 00:07:22.617 { 00:07:22.617 "job": "Nvme0n1", 00:07:22.617 "core_mask": "0x2", 00:07:22.617 "workload": "randwrite", 00:07:22.617 "status": "finished", 00:07:22.617 "queue_depth": 128, 00:07:22.617 "io_size": 4096, 00:07:22.617 "runtime": 10.002211, 00:07:22.617 "iops": 22961.023317744446, 00:07:22.617 "mibps": 89.69149733493924, 00:07:22.617 "io_failed": 0, 00:07:22.617 "io_timeout": 0, 00:07:22.617 "avg_latency_us": 5571.806401654765, 00:07:22.617 "min_latency_us": 3219.8121739130434, 00:07:22.617 "max_latency_us": 13905.029565217392 00:07:22.617 } 00:07:22.617 ], 00:07:22.617 "core_count": 1 00:07:22.617 } 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2590092 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2590092 ']' 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2590092 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590092 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:22.617 16:00:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590092' 00:07:22.617 killing process with pid 2590092 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2590092 00:07:22.617 Received shutdown signal, test time was about 10.000000 seconds 00:07:22.617 00:07:22.617 Latency(us) 00:07:22.617 [2024-11-20T15:00:23.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.617 [2024-11-20T15:00:23.454Z] =================================================================================================================== 00:07:22.617 [2024-11-20T15:00:23.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2590092 00:07:22.617 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:22.876 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.134 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:23.134 16:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2586250 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2586250 00:07:23.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2586250 Killed "${NVMF_APP[@]}" "$@" 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2591954 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2591954 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2591954 ']' 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.394 16:00:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.394 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.394 [2024-11-20 16:00:24.147239] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:23.394 [2024-11-20 16:00:24.147289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.394 [2024-11-20 16:00:24.227434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.652 [2024-11-20 16:00:24.268190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.652 [2024-11-20 16:00:24.268225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.652 [2024-11-20 16:00:24.268233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.652 [2024-11-20 16:00:24.268239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.652 [2024-11-20 16:00:24.268244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:23.652 [2024-11-20 16:00:24.268761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.652 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.910 [2024-11-20 16:00:24.578700] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:23.910 [2024-11-20 16:00:24.578782] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:23.910 [2024-11-20 16:00:24.578808] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 424184be-f90c-4f39-9519-d77a76c01f20 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=424184be-f90c-4f39-9519-d77a76c01f20 
00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.910 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:24.168 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 424184be-f90c-4f39-9519-d77a76c01f20 -t 2000 00:07:24.168 [ 00:07:24.168 { 00:07:24.168 "name": "424184be-f90c-4f39-9519-d77a76c01f20", 00:07:24.168 "aliases": [ 00:07:24.168 "lvs/lvol" 00:07:24.168 ], 00:07:24.168 "product_name": "Logical Volume", 00:07:24.168 "block_size": 4096, 00:07:24.168 "num_blocks": 38912, 00:07:24.168 "uuid": "424184be-f90c-4f39-9519-d77a76c01f20", 00:07:24.168 "assigned_rate_limits": { 00:07:24.168 "rw_ios_per_sec": 0, 00:07:24.168 "rw_mbytes_per_sec": 0, 00:07:24.168 "r_mbytes_per_sec": 0, 00:07:24.168 "w_mbytes_per_sec": 0 00:07:24.168 }, 00:07:24.168 "claimed": false, 00:07:24.168 "zoned": false, 00:07:24.168 "supported_io_types": { 00:07:24.168 "read": true, 00:07:24.168 "write": true, 00:07:24.168 "unmap": true, 00:07:24.168 "flush": false, 00:07:24.168 "reset": true, 00:07:24.168 "nvme_admin": false, 00:07:24.168 "nvme_io": false, 00:07:24.168 "nvme_io_md": false, 00:07:24.168 "write_zeroes": true, 00:07:24.168 "zcopy": false, 00:07:24.168 "get_zone_info": false, 00:07:24.168 "zone_management": false, 00:07:24.168 "zone_append": 
false, 00:07:24.168 "compare": false, 00:07:24.168 "compare_and_write": false, 00:07:24.168 "abort": false, 00:07:24.168 "seek_hole": true, 00:07:24.168 "seek_data": true, 00:07:24.168 "copy": false, 00:07:24.168 "nvme_iov_md": false 00:07:24.168 }, 00:07:24.168 "driver_specific": { 00:07:24.168 "lvol": { 00:07:24.168 "lvol_store_uuid": "c96aaff7-9a7e-46c3-81d1-7736cf53fcca", 00:07:24.168 "base_bdev": "aio_bdev", 00:07:24.168 "thin_provision": false, 00:07:24.168 "num_allocated_clusters": 38, 00:07:24.168 "snapshot": false, 00:07:24.168 "clone": false, 00:07:24.168 "esnap_clone": false 00:07:24.168 } 00:07:24.168 } 00:07:24.168 } 00:07:24.168 ] 00:07:24.168 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:24.168 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:24.168 16:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:24.425 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:24.425 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:24.425 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:24.683 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:24.683 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:24.683 [2024-11-20 16:00:25.511401] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.941 16:00:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:24.941 request: 00:07:24.941 { 00:07:24.941 "uuid": "c96aaff7-9a7e-46c3-81d1-7736cf53fcca", 00:07:24.941 "method": "bdev_lvol_get_lvstores", 00:07:24.941 "req_id": 1 00:07:24.941 } 00:07:24.941 Got JSON-RPC error response 00:07:24.941 response: 00:07:24.941 { 00:07:24.941 "code": -19, 00:07:24.941 "message": "No such device" 00:07:24.941 } 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.941 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.198 aio_bdev 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 424184be-f90c-4f39-9519-d77a76c01f20 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=424184be-f90c-4f39-9519-d77a76c01f20 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.198 16:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:25.455 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 424184be-f90c-4f39-9519-d77a76c01f20 -t 2000 00:07:25.711 [ 00:07:25.711 { 00:07:25.711 "name": "424184be-f90c-4f39-9519-d77a76c01f20", 00:07:25.711 "aliases": [ 00:07:25.711 "lvs/lvol" 00:07:25.711 ], 00:07:25.711 "product_name": "Logical Volume", 00:07:25.711 "block_size": 4096, 00:07:25.711 "num_blocks": 38912, 00:07:25.711 "uuid": "424184be-f90c-4f39-9519-d77a76c01f20", 00:07:25.711 "assigned_rate_limits": { 00:07:25.711 "rw_ios_per_sec": 0, 00:07:25.711 "rw_mbytes_per_sec": 0, 00:07:25.711 "r_mbytes_per_sec": 0, 00:07:25.711 "w_mbytes_per_sec": 0 00:07:25.711 }, 00:07:25.711 "claimed": false, 00:07:25.711 "zoned": false, 00:07:25.711 "supported_io_types": { 00:07:25.711 "read": true, 00:07:25.711 "write": true, 00:07:25.711 "unmap": true, 00:07:25.711 "flush": false, 00:07:25.711 "reset": true, 00:07:25.711 "nvme_admin": false, 00:07:25.711 "nvme_io": false, 00:07:25.711 "nvme_io_md": false, 00:07:25.711 "write_zeroes": true, 00:07:25.711 "zcopy": false, 00:07:25.711 "get_zone_info": false, 00:07:25.711 "zone_management": false, 00:07:25.711 "zone_append": false, 00:07:25.711 "compare": false, 00:07:25.711 "compare_and_write": false, 
00:07:25.711 "abort": false, 00:07:25.711 "seek_hole": true, 00:07:25.711 "seek_data": true, 00:07:25.711 "copy": false, 00:07:25.711 "nvme_iov_md": false 00:07:25.711 }, 00:07:25.711 "driver_specific": { 00:07:25.711 "lvol": { 00:07:25.711 "lvol_store_uuid": "c96aaff7-9a7e-46c3-81d1-7736cf53fcca", 00:07:25.711 "base_bdev": "aio_bdev", 00:07:25.711 "thin_provision": false, 00:07:25.711 "num_allocated_clusters": 38, 00:07:25.711 "snapshot": false, 00:07:25.711 "clone": false, 00:07:25.711 "esnap_clone": false 00:07:25.711 } 00:07:25.711 } 00:07:25.711 } 00:07:25.711 ] 00:07:25.711 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:25.711 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:25.711 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:25.711 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:25.711 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:25.711 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:25.969 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:25.969 16:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 424184be-f90c-4f39-9519-d77a76c01f20 00:07:26.226 16:00:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c96aaff7-9a7e-46c3-81d1-7736cf53fcca 00:07:26.483 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.483 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.742 00:07:26.742 real 0m16.855s 00:07:26.742 user 0m44.027s 00:07:26.742 sys 0m4.052s 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.742 ************************************ 00:07:26.742 END TEST lvs_grow_dirty 00:07:26.742 ************************************ 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:26.742 nvmf_trace.0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.742 rmmod nvme_tcp 00:07:26.742 rmmod nvme_fabrics 00:07:26.742 rmmod nvme_keyring 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2591954 ']' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2591954 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2591954 ']' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2591954 
00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2591954 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2591954' 00:07:26.742 killing process with pid 2591954 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2591954 00:07:26.742 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2591954 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.001 16:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:28.982 00:07:28.982 real 0m42.535s 00:07:28.982 user 1m5.722s 00:07:28.982 sys 0m10.588s 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 ************************************ 00:07:28.982 END TEST nvmf_lvs_grow 00:07:28.982 ************************************ 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.982 16:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.241 ************************************ 00:07:29.241 START TEST nvmf_bdev_io_wait 00:07:29.241 ************************************ 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:29.241 * Looking for test storage... 
00:07:29.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.241 16:00:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.241 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.241 --rc genhtml_branch_coverage=1 00:07:29.241 --rc genhtml_function_coverage=1 00:07:29.241 --rc genhtml_legend=1 00:07:29.241 --rc geninfo_all_blocks=1 00:07:29.241 --rc geninfo_unexecuted_blocks=1 00:07:29.241 00:07:29.241 ' 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.241 --rc genhtml_branch_coverage=1 00:07:29.241 --rc genhtml_function_coverage=1 00:07:29.241 --rc genhtml_legend=1 00:07:29.241 --rc geninfo_all_blocks=1 00:07:29.241 --rc geninfo_unexecuted_blocks=1 00:07:29.241 00:07:29.241 ' 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.241 --rc genhtml_branch_coverage=1 00:07:29.241 --rc genhtml_function_coverage=1 00:07:29.241 --rc genhtml_legend=1 00:07:29.241 --rc geninfo_all_blocks=1 00:07:29.241 --rc geninfo_unexecuted_blocks=1 00:07:29.241 00:07:29.241 ' 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.241 --rc genhtml_branch_coverage=1 00:07:29.241 --rc genhtml_function_coverage=1 00:07:29.241 --rc genhtml_legend=1 00:07:29.241 --rc geninfo_all_blocks=1 00:07:29.241 --rc geninfo_unexecuted_blocks=1 00:07:29.241 00:07:29.241 ' 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.241 16:00:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.241 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.242 16:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.811 16:00:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.811 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.811 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.812 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.812 16:00:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.812 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.812 
16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.812 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.812 16:00:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.812 16:00:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:07:35.812 00:07:35.812 --- 10.0.0.2 ping statistics --- 00:07:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.812 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:07:35.812 00:07:35.812 --- 10.0.0.1 ping statistics --- 00:07:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.812 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2596238 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2596238 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2596238 ']' 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.812 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.812 [2024-11-20 16:00:36.126026] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:35.812 [2024-11-20 16:00:36.126068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.812 [2024-11-20 16:00:36.210575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.812 [2024-11-20 16:00:36.254726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.812 [2024-11-20 16:00:36.254763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:35.812 [2024-11-20 16:00:36.254771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.812 [2024-11-20 16:00:36.254776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.812 [2024-11-20 16:00:36.254782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.812 [2024-11-20 16:00:36.256380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.812 [2024-11-20 16:00:36.256476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.812 [2024-11-20 16:00:36.256500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.812 [2024-11-20 16:00:36.256501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 16:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 16:00:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 [2024-11-20 16:00:37.071054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 Malloc0 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 
16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.379 [2024-11-20 16:00:37.114594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2596299 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2596302 
00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.379 { 00:07:36.379 "params": { 00:07:36.379 "name": "Nvme$subsystem", 00:07:36.379 "trtype": "$TEST_TRANSPORT", 00:07:36.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.379 "adrfam": "ipv4", 00:07:36.379 "trsvcid": "$NVMF_PORT", 00:07:36.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.379 "hdgst": ${hdgst:-false}, 00:07:36.379 "ddgst": ${ddgst:-false} 00:07:36.379 }, 00:07:36.379 "method": "bdev_nvme_attach_controller" 00:07:36.379 } 00:07:36.379 EOF 00:07:36.379 )") 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2596305 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.379 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2596309 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.380 { 00:07:36.380 "params": { 00:07:36.380 "name": "Nvme$subsystem", 00:07:36.380 "trtype": "$TEST_TRANSPORT", 00:07:36.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "$NVMF_PORT", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.380 "hdgst": ${hdgst:-false}, 00:07:36.380 "ddgst": ${ddgst:-false} 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 } 00:07:36.380 EOF 00:07:36.380 )") 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.380 { 00:07:36.380 "params": { 
00:07:36.380 "name": "Nvme$subsystem", 00:07:36.380 "trtype": "$TEST_TRANSPORT", 00:07:36.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "$NVMF_PORT", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.380 "hdgst": ${hdgst:-false}, 00:07:36.380 "ddgst": ${ddgst:-false} 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 } 00:07:36.380 EOF 00:07:36.380 )") 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.380 { 00:07:36.380 "params": { 00:07:36.380 "name": "Nvme$subsystem", 00:07:36.380 "trtype": "$TEST_TRANSPORT", 00:07:36.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "$NVMF_PORT", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.380 "hdgst": ${hdgst:-false}, 00:07:36.380 "ddgst": ${ddgst:-false} 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 } 00:07:36.380 EOF 00:07:36.380 )") 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2596299 00:07:36.380 16:00:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.380 "params": { 00:07:36.380 "name": "Nvme1", 00:07:36.380 "trtype": "tcp", 00:07:36.380 "traddr": "10.0.0.2", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "4420", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.380 "hdgst": false, 00:07:36.380 "ddgst": false 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 }' 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.380 "params": { 00:07:36.380 "name": "Nvme1", 00:07:36.380 "trtype": "tcp", 00:07:36.380 "traddr": "10.0.0.2", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "4420", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.380 "hdgst": false, 00:07:36.380 "ddgst": false 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 }' 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.380 "params": { 00:07:36.380 "name": "Nvme1", 00:07:36.380 "trtype": "tcp", 00:07:36.380 "traddr": "10.0.0.2", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "4420", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.380 "hdgst": false, 00:07:36.380 "ddgst": false 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 }' 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:36.380 16:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.380 "params": { 00:07:36.380 "name": "Nvme1", 00:07:36.380 "trtype": "tcp", 00:07:36.380 "traddr": "10.0.0.2", 00:07:36.380 "adrfam": "ipv4", 00:07:36.380 "trsvcid": "4420", 00:07:36.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.380 "hdgst": false, 00:07:36.380 "ddgst": false 00:07:36.380 }, 00:07:36.380 "method": "bdev_nvme_attach_controller" 00:07:36.380 }' 00:07:36.380 [2024-11-20 16:00:37.164166] Starting SPDK v25.01-pre git sha1 
c1691a126 / DPDK 24.03.0 initialization... 00:07:36.380 [2024-11-20 16:00:37.164215] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:36.380 [2024-11-20 16:00:37.169224] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:36.380 [2024-11-20 16:00:37.169270] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:36.381 [2024-11-20 16:00:37.169630] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:36.381 [2024-11-20 16:00:37.169672] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:36.381 [2024-11-20 16:00:37.169841] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:07:36.381 [2024-11-20 16:00:37.169883] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:36.639 [2024-11-20 16:00:37.363615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.639 [2024-11-20 16:00:37.406669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.639 [2024-11-20 16:00:37.472882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.897 [2024-11-20 16:00:37.522924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:36.897 [2024-11-20 16:00:37.523041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.897 [2024-11-20 16:00:37.564793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.897 [2024-11-20 16:00:37.565873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:36.897 [2024-11-20 16:00:37.607783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:36.897 Running I/O for 1 seconds... 00:07:36.897 Running I/O for 1 seconds... 00:07:36.897 Running I/O for 1 seconds... 00:07:37.154 Running I/O for 1 seconds... 
00:07:38.089 237224.00 IOPS, 926.66 MiB/s 00:07:38.089 Latency(us) 00:07:38.089 [2024-11-20T15:00:38.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.089 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:38.089 Nvme1n1 : 1.00 236853.98 925.21 0.00 0.00 537.14 231.51 1538.67 00:07:38.089 [2024-11-20T15:00:38.926Z] =================================================================================================================== 00:07:38.089 [2024-11-20T15:00:38.926Z] Total : 236853.98 925.21 0.00 0.00 537.14 231.51 1538.67 00:07:38.089 11634.00 IOPS, 45.45 MiB/s 00:07:38.089 Latency(us) 00:07:38.089 [2024-11-20T15:00:38.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.089 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:38.089 Nvme1n1 : 1.01 11679.72 45.62 0.00 0.00 10918.78 6211.67 16754.42 00:07:38.089 [2024-11-20T15:00:38.926Z] =================================================================================================================== 00:07:38.089 [2024-11-20T15:00:38.926Z] Total : 11679.72 45.62 0.00 0.00 10918.78 6211.67 16754.42 00:07:38.089 10146.00 IOPS, 39.63 MiB/s 00:07:38.089 Latency(us) 00:07:38.089 [2024-11-20T15:00:38.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.089 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:38.089 Nvme1n1 : 1.01 10217.24 39.91 0.00 0.00 12490.66 4729.99 21769.35 00:07:38.089 [2024-11-20T15:00:38.926Z] =================================================================================================================== 00:07:38.089 [2024-11-20T15:00:38.926Z] Total : 10217.24 39.91 0.00 0.00 12490.66 4729.99 21769.35 00:07:38.089 10898.00 IOPS, 42.57 MiB/s 00:07:38.089 Latency(us) 00:07:38.089 [2024-11-20T15:00:38.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.089 Job: Nvme1n1 (Core 
Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:38.089 Nvme1n1 : 1.01 10981.81 42.90 0.00 0.00 11625.66 3447.76 24276.81 00:07:38.089 [2024-11-20T15:00:38.926Z] =================================================================================================================== 00:07:38.089 [2024-11-20T15:00:38.926Z] Total : 10981.81 42.90 0.00 0.00 11625.66 3447.76 24276.81 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2596302 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2596305 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2596309 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:07:38.089 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.089 rmmod nvme_tcp 00:07:38.348 rmmod nvme_fabrics 00:07:38.348 rmmod nvme_keyring 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2596238 ']' 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2596238 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2596238 ']' 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2596238 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.348 16:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596238 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596238' 00:07:38.348 killing process with pid 2596238 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2596238 00:07:38.348 16:00:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2596238 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.348 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.606 16:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.512 00:07:40.512 real 0m11.425s 00:07:40.512 user 0m18.314s 00:07:40.512 sys 0m6.365s 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:40.512 ************************************ 
00:07:40.512 END TEST nvmf_bdev_io_wait 00:07:40.512 ************************************ 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.512 ************************************ 00:07:40.512 START TEST nvmf_queue_depth 00:07:40.512 ************************************ 00:07:40.512 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:40.772 * Looking for test storage... 00:07:40.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.772 --rc genhtml_branch_coverage=1 00:07:40.772 --rc genhtml_function_coverage=1 00:07:40.772 --rc genhtml_legend=1 00:07:40.772 --rc geninfo_all_blocks=1 00:07:40.772 --rc 
geninfo_unexecuted_blocks=1 00:07:40.772 00:07:40.772 ' 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.772 --rc genhtml_branch_coverage=1 00:07:40.772 --rc genhtml_function_coverage=1 00:07:40.772 --rc genhtml_legend=1 00:07:40.772 --rc geninfo_all_blocks=1 00:07:40.772 --rc geninfo_unexecuted_blocks=1 00:07:40.772 00:07:40.772 ' 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.772 --rc genhtml_branch_coverage=1 00:07:40.772 --rc genhtml_function_coverage=1 00:07:40.772 --rc genhtml_legend=1 00:07:40.772 --rc geninfo_all_blocks=1 00:07:40.772 --rc geninfo_unexecuted_blocks=1 00:07:40.772 00:07:40.772 ' 00:07:40.772 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.772 --rc genhtml_branch_coverage=1 00:07:40.772 --rc genhtml_function_coverage=1 00:07:40.772 --rc genhtml_legend=1 00:07:40.772 --rc geninfo_all_blocks=1 00:07:40.772 --rc geninfo_unexecuted_blocks=1 00:07:40.772 00:07:40.772 ' 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.773 16:00:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.773 16:00:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.773 16:00:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.773 16:00:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.344 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.344 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.344 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.345 16:00:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:47.345 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:47.345 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:47.345 Found net devices under 0000:86:00.0: cvl_0_0 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:47.345 Found net devices under 0000:86:00.1: cvl_0_1 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.345 
16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.345 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:07:47.346 00:07:47.346 --- 10.0.0.2 ping statistics --- 00:07:47.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.346 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:47.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:07:47.346 00:07:47.346 --- 10.0.0.1 ping statistics --- 00:07:47.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.346 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2600287 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2600287 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2600287 ']' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 [2024-11-20 16:00:47.590636] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:47.346 [2024-11-20 16:00:47.590686] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.346 [2024-11-20 16:00:47.673969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.346 [2024-11-20 16:00:47.713921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.346 [2024-11-20 16:00:47.713959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:47.346 [2024-11-20 16:00:47.713966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.346 [2024-11-20 16:00:47.713972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.346 [2024-11-20 16:00:47.713977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.346 [2024-11-20 16:00:47.714539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 [2024-11-20 16:00:47.859213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 Malloc0 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.346 [2024-11-20 16:00:47.909674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.346 16:00:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2600318 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2600318 /var/tmp/bdevperf.sock 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2600318 ']' 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.346 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.347 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.347 16:00:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.347 [2024-11-20 16:00:47.957953] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:07:47.347 [2024-11-20 16:00:47.957995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600318 ] 00:07:47.347 [2024-11-20 16:00:48.034632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.347 [2024-11-20 16:00:48.079200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.605 NVMe0n1 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.605 16:00:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:47.864 Running I/O for 10 seconds... 
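bdevperf reports throughput as both "iops" and "mibps" in its JSON summary; with a 4096-byte io_size the two are related by MiB/s = IOPS * 4096 / (1024*1024), i.e. IOPS/256. A quick awk sanity check of that conversion (the iops value here is illustrative, taken from this run's summary; the relation, not the number, is the point):

```shell
# Sanity-check a bdevperf summary: mibps should equal iops * io_size / 2^20.
iops=12151.074931463034
io_size=4096   # bytes, the "io_size" field in the JSON result
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
```

For a 4 KiB io_size this is simply the IOPS value divided by 256, which matches the MiB/s column bdevperf prints.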
00:07:49.731 11594.00 IOPS, 45.29 MiB/s [2024-11-20T15:00:51.501Z] 11769.50 IOPS, 45.97 MiB/s [2024-11-20T15:00:52.876Z] 11918.00 IOPS, 46.55 MiB/s [2024-11-20T15:00:53.811Z] 12016.75 IOPS, 46.94 MiB/s [2024-11-20T15:00:54.746Z] 12044.20 IOPS, 47.05 MiB/s [2024-11-20T15:00:55.681Z] 12010.83 IOPS, 46.92 MiB/s [2024-11-20T15:00:56.617Z] 12040.71 IOPS, 47.03 MiB/s [2024-11-20T15:00:57.551Z] 12072.25 IOPS, 47.16 MiB/s [2024-11-20T15:00:58.926Z] 12113.33 IOPS, 47.32 MiB/s [2024-11-20T15:00:58.926Z] 12118.80 IOPS, 47.34 MiB/s 00:07:58.089 Latency(us) 00:07:58.089 [2024-11-20T15:00:58.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.089 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:58.089 Verification LBA range: start 0x0 length 0x4000 00:07:58.089 NVMe0n1 : 10.06 12151.07 47.47 0.00 0.00 83951.30 17780.20 53112.65 00:07:58.089 [2024-11-20T15:00:58.926Z] =================================================================================================================== 00:07:58.089 [2024-11-20T15:00:58.926Z] Total : 12151.07 47.47 0.00 0.00 83951.30 17780.20 53112.65 00:07:58.089 { 00:07:58.089 "results": [ 00:07:58.089 { 00:07:58.089 "job": "NVMe0n1", 00:07:58.089 "core_mask": "0x1", 00:07:58.089 "workload": "verify", 00:07:58.089 "status": "finished", 00:07:58.089 "verify_range": { 00:07:58.089 "start": 0, 00:07:58.089 "length": 16384 00:07:58.089 }, 00:07:58.089 "queue_depth": 1024, 00:07:58.089 "io_size": 4096, 00:07:58.089 "runtime": 10.057711, 00:07:58.089 "iops": 12151.074931463034, 00:07:58.089 "mibps": 47.465136451027476, 00:07:58.089 "io_failed": 0, 00:07:58.089 "io_timeout": 0, 00:07:58.089 "avg_latency_us": 83951.30413653253, 00:07:58.089 "min_latency_us": 17780.201739130436, 00:07:58.089 "max_latency_us": 53112.653913043476 00:07:58.089 } 00:07:58.089 ], 00:07:58.089 "core_count": 1 00:07:58.089 } 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 2600318 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2600318 ']' 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2600318 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600318 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600318' 00:07:58.089 killing process with pid 2600318 00:07:58.089 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2600318 00:07:58.089 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.089 00:07:58.089 Latency(us) 00:07:58.089 [2024-11-20T15:00:58.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.090 [2024-11-20T15:00:58.927Z] =================================================================================================================== 00:07:58.090 [2024-11-20T15:00:58.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2600318 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.090 rmmod nvme_tcp 00:07:58.090 rmmod nvme_fabrics 00:07:58.090 rmmod nvme_keyring 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2600287 ']' 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2600287 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2600287 ']' 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2600287 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600287 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600287' 00:07:58.090 killing process with pid 2600287 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2600287 00:07:58.090 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2600287 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.349 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.885 16:01:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.885 00:08:00.885 real 0m19.816s 00:08:00.885 user 0m23.289s 00:08:00.885 sys 0m6.031s 00:08:00.885 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.885 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.886 ************************************ 00:08:00.886 END TEST nvmf_queue_depth 00:08:00.886 ************************************ 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.886 ************************************ 00:08:00.886 START TEST nvmf_target_multipath 00:08:00.886 ************************************ 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:00.886 * Looking for test storage... 
00:08:00.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:00.886 16:01:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.886 --rc genhtml_branch_coverage=1 00:08:00.886 --rc genhtml_function_coverage=1 00:08:00.886 --rc genhtml_legend=1 00:08:00.886 --rc geninfo_all_blocks=1 00:08:00.886 --rc geninfo_unexecuted_blocks=1 00:08:00.886 00:08:00.886 ' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.886 --rc genhtml_branch_coverage=1 00:08:00.886 --rc genhtml_function_coverage=1 00:08:00.886 --rc genhtml_legend=1 00:08:00.886 --rc geninfo_all_blocks=1 00:08:00.886 --rc geninfo_unexecuted_blocks=1 00:08:00.886 00:08:00.886 ' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.886 --rc genhtml_branch_coverage=1 00:08:00.886 --rc genhtml_function_coverage=1 00:08:00.886 --rc genhtml_legend=1 00:08:00.886 --rc geninfo_all_blocks=1 00:08:00.886 --rc geninfo_unexecuted_blocks=1 00:08:00.886 00:08:00.886 ' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.886 --rc genhtml_branch_coverage=1 00:08:00.886 --rc genhtml_function_coverage=1 00:08:00.886 --rc genhtml_legend=1 00:08:00.886 --rc geninfo_all_blocks=1 00:08:00.886 --rc geninfo_unexecuted_blocks=1 00:08:00.886 00:08:00.886 ' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.886 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.887 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.460 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:07.461 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:07.461 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:07.461 Found net devices under 0000:86:00.0: cvl_0_0 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.461 16:01:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:07.461 Found net devices under 0000:86:00.1: cvl_0_1 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:07.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:08:07.461 00:08:07.461 --- 10.0.0.2 ping statistics --- 00:08:07.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.461 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:08:07.461 
00:08:07.461 --- 10.0.0.1 ping statistics ---
00:08:07.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:07.461 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:07.461 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:08:07.462 only one NIC for nvmf test
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:07.462 rmmod nvme_tcp
00:08:07.462 rmmod nvme_fabrics
00:08:07.462 rmmod nvme_keyring
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:07.462 16:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:08.842 
00:08:08.842 real 0m8.391s
00:08:08.842 user 0m1.833s
00:08:08.842 sys 0m4.590s
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:08.842 ************************************
00:08:08.842 END TEST nvmf_target_multipath
00:08:08.842 ************************************
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core
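The nvmfcleanup trace above shows the teardown idiom used twice in this section: `set +e`, a bounded `for i in {1..20}` retry around `modprobe -v -r`, then `set -e` and an unconditional `return 0`. A minimal sketch of that pattern; `retry_cleanup` is an illustrative name, not an SPDK helper:

```shell
# Bounded-retry cleanup: suspend errexit, attempt the command up to
# 20 times, then restore strict mode. Teardown is best-effort, so the
# function never fails its caller.
retry_cleanup() {
    local i
    set +e
    for i in {1..20}; do
        "$@" && break   # stop as soon as the cleanup succeeds
        sleep 0.1
    done
    set -e
    return 0
}
```

Suspending errexit is what lets early `modprobe -r nvme-tcp` attempts fail harmlessly while the module still has users.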
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:08.842 16:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:09.102 ************************************
00:08:09.102 START TEST nvmf_zcopy
00:08:09.102 ************************************
00:08:09.102 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:09.102 * Looking for test storage...
00:08:09.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.102 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:09.102 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version
00:08:09.102 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:09.103 --rc genhtml_branch_coverage=1
00:08:09.103 --rc genhtml_function_coverage=1
00:08:09.103 --rc genhtml_legend=1
00:08:09.103 --rc geninfo_all_blocks=1
00:08:09.103 --rc geninfo_unexecuted_blocks=1
00:08:09.103 
00:08:09.103 '
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:09.103 --rc genhtml_branch_coverage=1
00:08:09.103 --rc genhtml_function_coverage=1
00:08:09.103 --rc genhtml_legend=1
00:08:09.103 --rc geninfo_all_blocks=1
00:08:09.103 --rc geninfo_unexecuted_blocks=1
00:08:09.103 
00:08:09.103 '
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:09.103 --rc genhtml_branch_coverage=1
00:08:09.103 --rc genhtml_function_coverage=1
00:08:09.103 --rc genhtml_legend=1
00:08:09.103 --rc geninfo_all_blocks=1
00:08:09.103 --rc geninfo_unexecuted_blocks=1
00:08:09.103 
00:08:09.103 '
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:09.103 --rc genhtml_branch_coverage=1
00:08:09.103 --rc
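The `lt 1.15 2` trace above steps through `cmp_versions`, which splits each version string on `.` and `-` (`IFS=.-`) and compares the components numerically, left to right. A simplified sketch of just the less-than case; `version_lt` is an illustrative name, not the script's own helper:

```shell
# Component-wise dotted-version comparison: return success when the
# first version is strictly lower than the second. Missing components
# default to 0 (so 1.15 vs 2 compares 1 < 2 at the first component).
version_lt() {
    local -a v1 v2
    local i n
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not strictly less-than
}
```

Here `version_lt 1.15 2` succeeds for the same reason the traced `lt 1.15 2` returned 0: the comparison is decided at the first component.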
genhtml_function_coverage=1
00:08:09.103 --rc genhtml_legend=1
00:08:09.103 --rc geninfo_all_blocks=1
00:08:09.103 --rc geninfo_unexecuted_blocks=1
00:08:09.103 
00:08:09.103 '
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:09.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:09.103 16:01:09
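The `'[' '' -eq 1 ']'` step above is what produces the `[: : integer expression expected` message in the log: `[` requires integer operands for `-eq`, and the variable expanded to an empty string. A hedged sketch of the usual guard, defaulting the value to 0 before the numeric test; `flag_enabled` is an illustrative helper, not an SPDK function:

```shell
# Succeed only when the flag is a literal 1. An unset or empty value
# collapses to 0 via ${1:-0} instead of crashing '[' with
# "integer expression expected".
flag_enabled() {
    [ "${1:-0}" -eq 1 ]
}
```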
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:09.103 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:09.104 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:09.104 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:09.104 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:09.104 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:09.104 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:08:09.104 16:01:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:15.675 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:08:15.676 Found 0000:86:00.0 (0x8086 - 0x159b)
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:08:15.676 Found 0000:86:00.1 (0x8086 - 0x159b)
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:08:15.676 Found net devices under 0000:86:00.0: cvl_0_0
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:08:15.676 Found net devices under 0000:86:00.1: cvl_0_1
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:15.676 16:01:15
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:15.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:15.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms
00:08:15.676 
00:08:15.676 --- 10.0.0.2 ping statistics ---
00:08:15.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:15.676 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:15.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:15.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:08:15.676 
00:08:15.676 --- 10.0.0.1 ping statistics ---
00:08:15.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:15.676 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:15.676 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2609208
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2609208
00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns
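The nvmf_tcp_init trace above wires the target NIC into a private network namespace and leaves the initiator NIC in the root namespace before the two cross-namespace pings. The sketch below only prints the equivalent `ip` commands rather than running them (executing them needs root and the real `cvl_0_*` interfaces); the namespace name and 10.0.0.0/24 addresses mirror the log rather than prescribe values:

```shell
# Print the namespace wiring from the trace: the target interface
# moves into cvl_0_0_ns_spdk with 10.0.0.2/24, while the initiator
# interface stays in the root namespace with 10.0.0.1/24.
netns_setup_cmds() {
    local ns="cvl_0_0_ns_spdk" tgt="cvl_0_0" ini="cvl_0_1"
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $tgt netns $ns" \
        "ip addr add 10.0.0.1/24 dev $ini" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt" \
        "ip link set $ini up" \
        "ip netns exec $ns ip link set $tgt up" \
        "ip netns exec $ns ip link set lo up"
}
netns_setup_cmds
```

Once both ends are up, the log's `ping -c 1 10.0.0.2` and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` verify connectivity in each direction before the target application starts.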
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2609208 ']' 00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.677 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 [2024-11-20 16:01:15.918559] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:15.677 [2024-11-20 16:01:15.918611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.677 [2024-11-20 16:01:16.000907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.677 [2024-11-20 16:01:16.042686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.677 [2024-11-20 16:01:16.042720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:15.677 [2024-11-20 16:01:16.042728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.677 [2024-11-20 16:01:16.042738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.677 [2024-11-20 16:01:16.042745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.677 [2024-11-20 16:01:16.043297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 [2024-11-20 16:01:16.188233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 [2024-11-20 16:01:16.208434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 malloc0 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.677 { 00:08:15.677 "params": { 00:08:15.677 "name": "Nvme$subsystem", 00:08:15.677 "trtype": "$TEST_TRANSPORT", 00:08:15.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.677 "adrfam": "ipv4", 00:08:15.677 "trsvcid": "$NVMF_PORT", 00:08:15.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.677 "hdgst": ${hdgst:-false}, 00:08:15.677 "ddgst": ${ddgst:-false} 00:08:15.677 }, 00:08:15.677 "method": "bdev_nvme_attach_controller" 00:08:15.677 } 00:08:15.677 EOF 00:08:15.677 )") 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:15.677 16:01:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.677 "params": { 00:08:15.677 "name": "Nvme1", 00:08:15.677 "trtype": "tcp", 00:08:15.677 "traddr": "10.0.0.2", 00:08:15.677 "adrfam": "ipv4", 00:08:15.677 "trsvcid": "4420", 00:08:15.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.677 "hdgst": false, 00:08:15.677 "ddgst": false 00:08:15.677 }, 00:08:15.677 "method": "bdev_nvme_attach_controller" 00:08:15.677 }' 00:08:15.677 [2024-11-20 16:01:16.292954] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:15.677 [2024-11-20 16:01:16.293002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609264 ] 00:08:15.677 [2024-11-20 16:01:16.368572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.677 [2024-11-20 16:01:16.409993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.936 Running I/O for 10 seconds... 
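[Editor's note] The xtrace above shows gen_nvmf_target_json building the bdevperf config: one JSON fragment per subsystem emitted by a heredoc, collected into a `config` array, joined with `IFS=,`, and piped through `jq` to the bdevperf `--json` file descriptor. A minimal self-contained sketch of that heredoc/join pattern (standalone function name is hypothetical; the real helper lives in nvmf/common.sh and also normalizes the result with jq):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern visible in the xtrace:
# emit one bdev_nvme_attach_controller fragment per subsystem via a
# heredoc, then join the fragments with IFS=, as nvmf/common.sh does.
gen_target_json_sketch() {
    local config=()
    local subsystem
    # "${@:-1}" defaults to a single subsystem "1" when no args are given,
    # matching the loop in the log.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins array elements with the first char of IFS (","),
    # producing the comma-separated fragment list seen after 'printf %s\n'.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json_sketch 1
```

With a single subsystem this reproduces the expanded `{ "params": { "name": "Nvme1", ... } }` block printed in the log; the real helper feeds that output to bdevperf as its `--json /dev/fd/62` configuration.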
00:08:17.808 8469.00 IOPS, 66.16 MiB/s
[2024-11-20T15:01:19.696Z] 8557.00 IOPS, 66.85 MiB/s
[2024-11-20T15:01:20.631Z] 8560.00 IOPS, 66.88 MiB/s
[2024-11-20T15:01:22.009Z] 8548.00 IOPS, 66.78 MiB/s
[2024-11-20T15:01:22.944Z] 8550.00 IOPS, 66.80 MiB/s
[2024-11-20T15:01:23.881Z] 8560.67 IOPS, 66.88 MiB/s
[2024-11-20T15:01:24.816Z] 8575.71 IOPS, 67.00 MiB/s
[2024-11-20T15:01:25.751Z] 8578.12 IOPS, 67.02 MiB/s
[2024-11-20T15:01:26.688Z] 8581.67 IOPS, 67.04 MiB/s
[2024-11-20T15:01:26.946Z] 8581.20 IOPS, 67.04 MiB/s
00:08:26.109 Latency(us)
00:08:26.109 [2024-11-20T15:01:26.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:26.110 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:26.110 Verification LBA range: start 0x0 length 0x1000
00:08:26.110 Nvme1n1 : 10.05 8548.19 66.78 0.00 0.00 14877.35 1923.34 44222.55
00:08:26.110 [2024-11-20T15:01:26.947Z] ===================================================================================================================
00:08:26.110 [2024-11-20T15:01:26.947Z] Total : 8548.19 66.78 0.00 0.00 14877.35 1923.34 44222.55
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2611074
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:26.110 16:01:26
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.110 { 00:08:26.110 "params": { 00:08:26.110 "name": "Nvme$subsystem", 00:08:26.110 "trtype": "$TEST_TRANSPORT", 00:08:26.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.110 "adrfam": "ipv4", 00:08:26.110 "trsvcid": "$NVMF_PORT", 00:08:26.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.110 "hdgst": ${hdgst:-false}, 00:08:26.110 "ddgst": ${ddgst:-false} 00:08:26.110 }, 00:08:26.110 "method": "bdev_nvme_attach_controller" 00:08:26.110 } 00:08:26.110 EOF 00:08:26.110 )") 00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:26.110 [2024-11-20 16:01:26.855052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.855085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:26.110 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.110 "params": { 00:08:26.110 "name": "Nvme1", 00:08:26.110 "trtype": "tcp", 00:08:26.110 "traddr": "10.0.0.2", 00:08:26.110 "adrfam": "ipv4", 00:08:26.110 "trsvcid": "4420", 00:08:26.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.110 "hdgst": false, 00:08:26.110 "ddgst": false 00:08:26.110 }, 00:08:26.110 "method": "bdev_nvme_attach_controller" 00:08:26.110 }' 00:08:26.110 [2024-11-20 16:01:26.867050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.867062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 [2024-11-20 16:01:26.879075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.879086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 [2024-11-20 16:01:26.891105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.891115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 [2024-11-20 16:01:26.895963] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:08:26.110 [2024-11-20 16:01:26.896012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2611074 ] 00:08:26.110 [2024-11-20 16:01:26.903142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.903156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 [2024-11-20 16:01:26.915167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.915178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 [2024-11-20 16:01:26.927202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.927211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.110 [2024-11-20 16:01:26.939232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.110 [2024-11-20 16:01:26.939242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:26.951272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:26.951287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:26.963292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:26.963301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:26.971663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.369 [2024-11-20 16:01:26.975336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:26.369 [2024-11-20 16:01:26.975346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:26.987357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:26.987371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:26.999392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:26.999403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.011423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.011436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.012749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.369 [2024-11-20 16:01:27.023479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.023493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.035498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.035517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.047525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.047539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.059555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.059569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.071588] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.071601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.083619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.083632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.095650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.095659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.107701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.107722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.119726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.119741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.131758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.131771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.143802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.369 [2024-11-20 16:01:27.143815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.369 [2024-11-20 16:01:27.155816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.370 [2024-11-20 16:01:27.155826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.370 [2024-11-20 16:01:27.167857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:26.370 [2024-11-20 16:01:27.167870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.370 [2024-11-20 16:01:27.179892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.370 [2024-11-20 16:01:27.179906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.370 [2024-11-20 16:01:27.191921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.370 [2024-11-20 16:01:27.191931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.370 [2024-11-20 16:01:27.203959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.370 [2024-11-20 16:01:27.203969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.215986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.216000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.228032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.228046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.240062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.240072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.252099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.252112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.264131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 
[2024-11-20 16:01:27.264142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.276163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.276176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.288195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.288206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.300230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.300242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.312261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.312273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.324302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.324321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 Running I/O for 5 seconds... 
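[Editor's note] The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is expected: while bdevperf I/O is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for an NSID that is still attached, and the target rejects each attempt. A self-contained sketch of that retry pattern, with a stub `rpc_cmd` standing in for the real RPC helper (the stub and the attempt count are illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# Illustrative retry loop: re-issue an RPC that is expected to fail
# while the namespace is busy, counting the rejections.
attempts=0
failures=0

rpc_cmd() {
    # Stub: the real rpc_cmd talks to the SPDK target over
    # /var/tmp/spdk.sock; here we simulate the target rejecting the
    # duplicate NSID ("Requested NSID 1 already in use") every time.
    return 1
}

while [ "$attempts" -lt 5 ]; do
    attempts=$((attempts + 1))
    if ! rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        # Expected failure while the NSID is attached: count and retry.
        failures=$((failures + 1))
    fi
done

echo "attempts=$attempts failures=$failures"   # prints "attempts=5 failures=5"
```

Each rejected attempt corresponds to one subsystem.c/nvmf_rpc.c error pair in the log; the test only cares that the target stays responsive and keeps refusing cleanly while the 5-second randrw workload runs.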
00:08:26.629 [2024-11-20 16:01:27.336329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.336340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.352155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.352176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.366424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.366444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.380657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.380677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.394990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.395009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.409296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.409315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.423209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.423228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.437405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.437425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.629 [2024-11-20 16:01:27.451155] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.629 [2024-11-20 16:01:27.451174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.465338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.465362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.479606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.479625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.493605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.493628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.507637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.507657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.521158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.521177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.535364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.535384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.545885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.545904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.560487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.560506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.574595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.574615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.588150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.588170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.601923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.601944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.615936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.615962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.629986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.630006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.643895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.643915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.657946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.657971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.671770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 
[2024-11-20 16:01:27.671800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.685779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.685798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.699957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.699975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.888 [2024-11-20 16:01:27.713893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.888 [2024-11-20 16:01:27.713911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.147 [2024-11-20 16:01:27.728427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.148 [2024-11-20 16:01:27.728452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.148 [2024-11-20 16:01:27.744033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.148 [2024-11-20 16:01:27.744052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.148 [2024-11-20 16:01:27.758052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.148 [2024-11-20 16:01:27.758071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.148 [2024-11-20 16:01:27.771978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.148 [2024-11-20 16:01:27.771996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.148 [2024-11-20 16:01:27.786174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.148 [2024-11-20 16:01:27.786193] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.148 [2024-11-20 16:01:27.800062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.148 [2024-11-20 16:01:27.800081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats continuously, timestamps advancing from 16:01:27.814 to 16:01:28.325 ...]
00:08:27.667 16593.00 IOPS, 129.63 MiB/s [2024-11-20T15:01:28.504Z]
[... error pair continues repeating, timestamps 16:01:28.336 to 16:01:29.323 ...]
00:08:28.705 16625.00 IOPS, 129.88 MiB/s [2024-11-20T15:01:29.542Z]
[... error pair continues repeating, timestamps 16:01:29.338 to 16:01:29.891 ...] 00:08:29.224 [2024-11-20 16:01:29.900258]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.900277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:29.914702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.914721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:29.928606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.928625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:29.943140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.943159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:29.958377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.958396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:29.972556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.972575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:29.986412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:29.986432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:30.000389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:30.000408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:30.014523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:30.014542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:30.028914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:30.028936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:30.040106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:30.040127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.224 [2024-11-20 16:01:30.055899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.224 [2024-11-20 16:01:30.055922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.071117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.071137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.085201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.085221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.099286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.099316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.114262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.114280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.129545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 
[2024-11-20 16:01:30.129565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.143613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.143632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.154275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.154294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.169012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.169032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.182605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.182625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.197149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.197168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.212457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.212477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.226540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.226560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.240759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.240779] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.255002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.255022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.268884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.268904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.279481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.279500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.293818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.293843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.483 [2024-11-20 16:01:30.308024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.483 [2024-11-20 16:01:30.308044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.742 [2024-11-20 16:01:30.319162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.742 [2024-11-20 16:01:30.319181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.742 [2024-11-20 16:01:30.333363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.742 [2024-11-20 16:01:30.333382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.742 16611.33 IOPS, 129.78 MiB/s [2024-11-20T15:01:30.579Z] [2024-11-20 16:01:30.347519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.742 [2024-11-20 16:01:30.347539] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.742 [2024-11-20 16:01:30.358331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.742 [2024-11-20 16:01:30.358351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.742 [2024-11-20 16:01:30.372724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.742 [2024-11-20 16:01:30.372744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.386350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.386370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.400877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.400895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.416080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.416100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.430265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.430284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.444766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.444786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.455923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.455942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:29.743 [2024-11-20 16:01:30.470401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.470421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.484261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.484281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.498268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.498287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.512346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.512365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.526596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.526615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.537787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.537806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.552313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.552336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.743 [2024-11-20 16:01:30.565867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.743 [2024-11-20 16:01:30.565886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.580281] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.580300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.594433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.594452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.608471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.608490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.622830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.622849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.633787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.633806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.648023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.648041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.662415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.662440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.673629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.673647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.688078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.688097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.701660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.701679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.716203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.716222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.727130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.727149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.741834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.741854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.755967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.755985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.770140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.770160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.784245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.784265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.798713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 
[2024-11-20 16:01:30.798732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.809900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.809923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.002 [2024-11-20 16:01:30.824221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.002 [2024-11-20 16:01:30.824240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.838150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.838169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.851836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.851856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.866171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.866191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.880071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.880090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.894350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.894370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.908710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.908730] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.922942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.922966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.937366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.937385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.951327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.951346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.962022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.962041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.971725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.971744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.986627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.986645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:30.997928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:30.997953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:31.007642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.007660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:30.261 [2024-11-20 16:01:31.022457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.022477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:31.033190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.033209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:31.047643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.047661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:31.061468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.061487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:31.075682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.075701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.261 [2024-11-20 16:01:31.089640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.261 [2024-11-20 16:01:31.089659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.103611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.103630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.117864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.117884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.129056] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.129076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.138585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.138603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.153480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.153498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.164660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.164678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.179321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.179339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.193266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.193286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.207650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.207670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.221691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.221710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.236059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.236079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.247014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.247033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.261415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.261433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.275220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.275239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.289441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.289460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.303153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.303172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.317600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.317618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 [2024-11-20 16:01:31.331715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.331733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.519 16601.25 IOPS, 129.70 MiB/s [2024-11-20T15:01:31.356Z] [2024-11-20 16:01:31.342049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.519 [2024-11-20 16:01:31.342068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.356445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.356464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.370831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.370850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.384907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.384926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.398869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.398888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.413365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.413386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.424437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.424457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.439714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.439739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.455046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 
[2024-11-20 16:01:31.455068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.469276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.469297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.483039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.483059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.497406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.497427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.508444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.508463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.522896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.522917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.536817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.536838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.550377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.550398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.564267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.564292] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.578555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.578575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.589341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.589361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.776 [2024-11-20 16:01:31.604020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.776 [2024-11-20 16:01:31.604040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.617686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.617707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.632340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.632360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.647455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.647474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.662098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.662117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.676079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.676100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:31.034 [2024-11-20 16:01:31.689871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.689891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.704528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.704547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.719774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.719794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.734355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.734374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.745046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.745065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.759353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.759372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.773290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.773309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.784333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.784352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.799123] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.799144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.812640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.812659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.827155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.827183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.841681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.841700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.852761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.852780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.034 [2024-11-20 16:01:31.867501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.034 [2024-11-20 16:01:31.867520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.881650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.881670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.891155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.891173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.905514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.905532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.919709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.919729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.933560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.933580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.947954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.947990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.958860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.958879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.973034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.973053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:31.986881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:31.986900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.001186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.001205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.015018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 
[2024-11-20 16:01:32.015037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.029165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.029184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.043362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.043382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.054887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.054907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.069030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.069048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.083114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.083138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.096890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.096909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.111408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.111427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.292 [2024-11-20 16:01:32.122366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.292 [2024-11-20 16:01:32.122384] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.136988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.137008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.147590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.147609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.161972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.161991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.175925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.175944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.190667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.190685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.205967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.205986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.219903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.219921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.551 [2024-11-20 16:01:32.234222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.551 [2024-11-20 16:01:32.234241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:08:31.551 [2024-11-20 16:01:32.248451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.248471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.263241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.263259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.278715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.278734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.293253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.293272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.302003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.302022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.316466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.316485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.330191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.330210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 16591.80 IOPS, 129.62 MiB/s [2024-11-20T15:01:32.388Z] [2024-11-20 16:01:32.344456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.344476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 
00:08:31.551 Latency(us)
00:08:31.551 [2024-11-20T15:01:32.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:31.551 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:31.551 Nvme1n1 : 5.01 16594.63 129.65 0.00 0.00 7706.07 3632.97 17894.18
00:08:31.551 [2024-11-20T15:01:32.388Z] ===================================================================================================================
00:08:31.551 [2024-11-20T15:01:32.388Z] Total : 16594.63 129.65 0.00 0.00 7706.07 3632.97 17894.18
00:08:31.551 [2024-11-20 16:01:32.352561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.352578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.364606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.364622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.551 [2024-11-20 16:01:32.376646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.551 [2024-11-20 16:01:32.376662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.810 [2024-11-20 16:01:32.388676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.810 [2024-11-20 16:01:32.388694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.810 [2024-11-20 16:01:32.400702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.810 [2024-11-20 16:01:32.400717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.810 [2024-11-20 16:01:32.412736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.810 [2024-11-20 16:01:32.412750]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.424765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.424780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.436797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.436811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.448830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.448842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.460861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.460874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.472893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.472903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.484929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.484942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.496959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.810 [2024-11-20 16:01:32.496970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.810 [2024-11-20 16:01:32.508991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.811 [2024-11-20 16:01:32.509002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:31.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2611074) - No such process 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2611074 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.811 delay0 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.811 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:32.069 
[2024-11-20 16:01:32.661598] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:38.627 Initializing NVMe Controllers 00:08:38.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:38.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:38.627 Initialization complete. Launching workers. 00:08:38.627 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3283 00:08:38.627 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3556, failed to submit 47 00:08:38.627 success 3368, unsuccessful 188, failed 0 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.628 rmmod nvme_tcp 00:08:38.628 rmmod nvme_fabrics 00:08:38.628 rmmod nvme_keyring 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:38.628 16:01:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2609208 ']' 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2609208 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2609208 ']' 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2609208 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2609208 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2609208' 00:08:38.628 killing process with pid 2609208 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2609208 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2609208 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:38.628 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.886 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.886 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:38.886 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.886 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.886 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.792 00:08:40.792 real 0m31.853s 00:08:40.792 user 0m42.768s 00:08:40.792 sys 0m11.183s 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.792 ************************************ 00:08:40.792 END TEST nvmf_zcopy 00:08:40.792 ************************************ 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.792 ************************************ 00:08:40.792 START TEST nvmf_nmic 00:08:40.792 ************************************ 00:08:40.792 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:41.052 * Looking for test storage... 00:08:41.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:41.052 16:01:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.052 --rc 
genhtml_branch_coverage=1 00:08:41.052 --rc genhtml_function_coverage=1 00:08:41.052 --rc genhtml_legend=1 00:08:41.052 --rc geninfo_all_blocks=1 00:08:41.052 --rc geninfo_unexecuted_blocks=1 00:08:41.052 00:08:41.052 ' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.052 --rc genhtml_branch_coverage=1 00:08:41.052 --rc genhtml_function_coverage=1 00:08:41.052 --rc genhtml_legend=1 00:08:41.052 --rc geninfo_all_blocks=1 00:08:41.052 --rc geninfo_unexecuted_blocks=1 00:08:41.052 00:08:41.052 ' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.052 --rc genhtml_branch_coverage=1 00:08:41.052 --rc genhtml_function_coverage=1 00:08:41.052 --rc genhtml_legend=1 00:08:41.052 --rc geninfo_all_blocks=1 00:08:41.052 --rc geninfo_unexecuted_blocks=1 00:08:41.052 00:08:41.052 ' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.052 --rc genhtml_branch_coverage=1 00:08:41.052 --rc genhtml_function_coverage=1 00:08:41.052 --rc genhtml_legend=1 00:08:41.052 --rc geninfo_all_blocks=1 00:08:41.052 --rc geninfo_unexecuted_blocks=1 00:08:41.052 00:08:41.052 ' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.052 16:01:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.052 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.053 
16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.053 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.617 16:01:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:47.617 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:47.617 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:47.617 Found net devices under 0000:86:00.0: cvl_0_0 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:47.617 Found net devices under 0000:86:00.1: cvl_0_1 00:08:47.617 
16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.617 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:08:47.618 00:08:47.618 --- 10.0.0.2 ping statistics --- 00:08:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.618 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:08:47.618 00:08:47.618 --- 10.0.0.1 ping statistics --- 00:08:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.618 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2616675 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2616675 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2616675 ']' 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.618 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 [2024-11-20 16:01:47.873493] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:47.618 [2024-11-20 16:01:47.873545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.618 [2024-11-20 16:01:47.952184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.618 [2024-11-20 16:01:47.996740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.618 [2024-11-20 16:01:47.996781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:47.618 [2024-11-20 16:01:47.996788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.618 [2024-11-20 16:01:47.996794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.618 [2024-11-20 16:01:47.996799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.618 [2024-11-20 16:01:47.998395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.618 [2024-11-20 16:01:47.998513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.618 [2024-11-20 16:01:47.998426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.618 [2024-11-20 16:01:47.998514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 [2024-11-20 16:01:48.137452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.618 
16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 Malloc0 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 [2024-11-20 16:01:48.206952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:47.618 test case1: single bdev can't be used in multiple subsystems 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.618 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.619 [2024-11-20 16:01:48.238855] bdev.c:8326:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:47.619 [2024-11-20 
16:01:48.238875] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:47.619 [2024-11-20 16:01:48.238882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.619 request: 00:08:47.619 { 00:08:47.619 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:47.619 "namespace": { 00:08:47.619 "bdev_name": "Malloc0", 00:08:47.619 "no_auto_visible": false 00:08:47.619 }, 00:08:47.619 "method": "nvmf_subsystem_add_ns", 00:08:47.619 "req_id": 1 00:08:47.619 } 00:08:47.619 Got JSON-RPC error response 00:08:47.619 response: 00:08:47.619 { 00:08:47.619 "code": -32602, 00:08:47.619 "message": "Invalid parameters" 00:08:47.619 } 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:47.619 Adding namespace failed - expected result. 
00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:47.619 test case2: host connect to nvmf target in multiple paths 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.619 [2024-11-20 16:01:48.251012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.619 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.552 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:49.925 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.925 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:49.925 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.925 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:49.925 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:51.823 16:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:51.823 [global] 00:08:51.823 thread=1 00:08:51.823 invalidate=1 00:08:51.823 rw=write 00:08:51.823 time_based=1 00:08:51.823 runtime=1 00:08:51.823 ioengine=libaio 00:08:51.823 direct=1 00:08:51.823 bs=4096 00:08:51.823 iodepth=1 00:08:51.823 norandommap=0 00:08:51.823 numjobs=1 00:08:51.823 00:08:51.823 verify_dump=1 00:08:51.823 verify_backlog=512 00:08:51.823 verify_state_save=0 00:08:51.823 do_verify=1 00:08:51.823 verify=crc32c-intel 00:08:51.824 [job0] 00:08:51.824 filename=/dev/nvme0n1 00:08:51.824 Could not set queue depth (nvme0n1) 00:08:52.082 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.082 fio-3.35 00:08:52.082 Starting 1 thread 00:08:53.455 00:08:53.455 job0: (groupid=0, jobs=1): err= 0: pid=2617738: Wed Nov 20 16:01:53 2024 00:08:53.455 read: IOPS=2538, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:08:53.455 slat (nsec): min=6601, max=26784, avg=7526.55, stdev=972.35 00:08:53.455 clat (usec): min=153, max=41181, avg=193.98, stdev=810.33 00:08:53.455 lat (usec): min=161, max=41190, 
avg=201.51, stdev=810.36 00:08:53.455 clat percentiles (usec): 00:08:53.455 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 172], 00:08:53.455 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 178], 00:08:53.455 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 192], 00:08:53.455 | 99.00th=[ 235], 99.50th=[ 265], 99.90th=[ 273], 99.95th=[ 306], 00:08:53.455 | 99.99th=[41157] 00:08:53.455 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:08:53.455 slat (nsec): min=9193, max=40923, avg=10508.91, stdev=1324.51 00:08:53.455 clat (usec): min=114, max=353, avg=146.02, stdev=41.28 00:08:53.455 lat (usec): min=124, max=394, avg=156.53, stdev=41.64 00:08:53.455 clat percentiles (usec): 00:08:53.455 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 124], 00:08:53.455 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 130], 00:08:53.455 | 70.00th=[ 133], 80.00th=[ 153], 90.00th=[ 241], 95.00th=[ 243], 00:08:53.455 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 285], 00:08:53.455 | 99.99th=[ 355] 00:08:53.455 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=2 00:08:53.455 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:08:53.455 lat (usec) : 250=99.33%, 500=0.66% 00:08:53.455 lat (msec) : 50=0.02% 00:08:53.455 cpu : usr=3.27%, sys=4.66%, ctx=5633, majf=0, minf=1 00:08:53.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.455 issued rwts: total=2561,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.455 00:08:53.455 Run status group 0 (all jobs): 00:08:53.455 READ: bw=9.91MiB/s (10.4MB/s), 9.91MiB/s-9.91MiB/s (10.4MB/s-10.4MB/s), io=10.0MiB (10.5MB), run=1009-1009msec 
00:08:53.455 WRITE: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=12.0MiB (12.6MB), run=1009-1009msec 00:08:53.455 00:08:53.455 Disk stats (read/write): 00:08:53.455 nvme0n1: ios=2610/2600, merge=0/0, ticks=474/371, in_queue=845, util=91.08% 00:08:53.455 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.455 rmmod nvme_tcp 00:08:53.455 rmmod nvme_fabrics 00:08:53.455 rmmod nvme_keyring 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2616675 ']' 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2616675 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2616675 ']' 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2616675 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2616675 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.455 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2616675' 00:08:53.456 killing process with pid 2616675 00:08:53.456 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2616675 00:08:53.456 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2616675 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.714 16:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.251 00:08:56.251 real 0m14.910s 00:08:56.251 user 0m32.702s 00:08:56.251 sys 0m5.373s 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:56.251 ************************************ 00:08:56.251 END TEST nvmf_nmic 00:08:56.251 ************************************ 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.251 ************************************ 00:08:56.251 START TEST nvmf_fio_target 00:08:56.251 ************************************ 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:56.251 * Looking for test storage... 00:08:56.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.251 16:01:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.251 --rc genhtml_branch_coverage=1 00:08:56.251 --rc genhtml_function_coverage=1 00:08:56.251 --rc genhtml_legend=1 00:08:56.251 --rc geninfo_all_blocks=1 00:08:56.251 --rc geninfo_unexecuted_blocks=1 00:08:56.251 00:08:56.251 ' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.251 --rc genhtml_branch_coverage=1 00:08:56.251 --rc genhtml_function_coverage=1 00:08:56.251 --rc genhtml_legend=1 00:08:56.251 --rc geninfo_all_blocks=1 00:08:56.251 --rc geninfo_unexecuted_blocks=1 00:08:56.251 00:08:56.251 ' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.251 --rc genhtml_branch_coverage=1 00:08:56.251 --rc genhtml_function_coverage=1 00:08:56.251 --rc genhtml_legend=1 00:08:56.251 --rc geninfo_all_blocks=1 00:08:56.251 --rc geninfo_unexecuted_blocks=1 00:08:56.251 00:08:56.251 ' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.251 --rc 
genhtml_branch_coverage=1 00:08:56.251 --rc genhtml_function_coverage=1 00:08:56.251 --rc genhtml_legend=1 00:08:56.251 --rc geninfo_all_blocks=1 00:08:56.251 --rc geninfo_unexecuted_blocks=1 00:08:56.251 00:08:56.251 ' 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.251 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.252 16:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.935 16:02:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.935 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:02.936 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:02.936 16:02:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:02.936 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:02.936 Found net devices under 0000:86:00.0: cvl_0_0 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:02.936 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:09:02.936 00:09:02.936 --- 10.0.0.2 ping statistics --- 00:09:02.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.936 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:09:02.936 00:09:02.936 --- 10.0.0.1 ping statistics --- 00:09:02.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.936 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2621536 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2621536 00:09:02.936 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.937 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2621536 ']' 00:09:02.937 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.937 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.937 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.937 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.937 16:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.937 [2024-11-20 16:02:02.851054] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:09:02.937 [2024-11-20 16:02:02.851098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.937 [2024-11-20 16:02:02.932072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.937 [2024-11-20 16:02:02.976380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.937 [2024-11-20 16:02:02.976417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.937 [2024-11-20 16:02:02.976425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.937 [2024-11-20 16:02:02.976433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.937 [2024-11-20 16:02:02.976442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:02.937 [2024-11-20 16:02:02.978013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.937 [2024-11-20 16:02:02.978057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.937 [2024-11-20 16:02:02.978074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.937 [2024-11-20 16:02:02.978081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.937 [2024-11-20 16:02:03.289666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.937 16:02:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:02.937 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.194 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:03.194 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.452 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:03.452 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:03.709 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.967 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:03.967 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.225 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:04.225 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.225 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:04.225 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:04.483 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.740 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.740 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.998 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.998 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.998 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.256 [2024-11-20 16:02:05.999492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.256 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:05.513 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:05.770 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:06.701 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:06.701 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:06.701 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.701 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:06.701 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:06.701 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:09.232 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:09.232 [global] 00:09:09.232 thread=1 00:09:09.232 invalidate=1 00:09:09.232 rw=write 00:09:09.232 time_based=1 00:09:09.232 runtime=1 00:09:09.232 ioengine=libaio 00:09:09.232 direct=1 00:09:09.232 bs=4096 00:09:09.232 iodepth=1 00:09:09.232 norandommap=0 00:09:09.232 numjobs=1 00:09:09.232 00:09:09.232 
verify_dump=1 00:09:09.232 verify_backlog=512 00:09:09.232 verify_state_save=0 00:09:09.232 do_verify=1 00:09:09.232 verify=crc32c-intel 00:09:09.232 [job0] 00:09:09.232 filename=/dev/nvme0n1 00:09:09.232 [job1] 00:09:09.232 filename=/dev/nvme0n2 00:09:09.232 [job2] 00:09:09.232 filename=/dev/nvme0n3 00:09:09.232 [job3] 00:09:09.232 filename=/dev/nvme0n4 00:09:09.232 Could not set queue depth (nvme0n1) 00:09:09.232 Could not set queue depth (nvme0n2) 00:09:09.232 Could not set queue depth (nvme0n3) 00:09:09.232 Could not set queue depth (nvme0n4) 00:09:09.232 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.232 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.232 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.232 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.232 fio-3.35 00:09:09.232 Starting 4 threads 00:09:10.606 00:09:10.606 job0: (groupid=0, jobs=1): err= 0: pid=2622884: Wed Nov 20 16:02:11 2024 00:09:10.606 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:10.606 slat (nsec): min=6799, max=27557, avg=8192.41, stdev=3037.84 00:09:10.606 clat (usec): min=165, max=41064, avg=1713.52, stdev=7712.21 00:09:10.606 lat (usec): min=173, max=41083, avg=1721.71, stdev=7714.88 00:09:10.606 clat percentiles (usec): 00:09:10.606 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 190], 00:09:10.606 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:09:10.606 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 233], 00:09:10.606 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:10.606 | 99.99th=[41157] 00:09:10.606 write: IOPS=616, BW=2466KiB/s (2525kB/s)(2468KiB/1001msec); 0 zone resets 00:09:10.606 slat (nsec): min=9803, max=49225, avg=13784.58, 
stdev=3955.87 00:09:10.606 clat (usec): min=122, max=364, avg=172.63, stdev=20.45 00:09:10.606 lat (usec): min=133, max=401, avg=186.42, stdev=21.82 00:09:10.606 clat percentiles (usec): 00:09:10.606 | 1.00th=[ 128], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 161], 00:09:10.606 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:10.606 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:09:10.606 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 363], 99.95th=[ 363], 00:09:10.606 | 99.99th=[ 363] 00:09:10.606 bw ( KiB/s): min= 4096, max= 4096, per=36.18%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.606 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.606 lat (usec) : 250=97.96%, 500=0.35% 00:09:10.606 lat (msec) : 50=1.68% 00:09:10.606 cpu : usr=0.70%, sys=1.70%, ctx=1133, majf=0, minf=1 00:09:10.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.606 issued rwts: total=512,617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.607 job1: (groupid=0, jobs=1): err= 0: pid=2622889: Wed Nov 20 16:02:11 2024 00:09:10.607 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:09:10.607 slat (nsec): min=11619, max=22742, avg=20257.14, stdev=3488.62 00:09:10.607 clat (usec): min=40574, max=41085, avg=40950.24, stdev=105.87 00:09:10.607 lat (usec): min=40586, max=41107, avg=40970.50, stdev=107.32 00:09:10.607 clat percentiles (usec): 00:09:10.607 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:10.607 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.607 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.607 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:10.607 | 99.99th=[41157] 00:09:10.607 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:10.607 slat (nsec): min=10205, max=48697, avg=11739.63, stdev=2272.63 00:09:10.607 clat (usec): min=144, max=3122, avg=185.92, stdev=131.35 00:09:10.607 lat (usec): min=156, max=3141, avg=197.66, stdev=131.77 00:09:10.607 clat percentiles (usec): 00:09:10.607 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:09:10.607 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:09:10.607 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 215], 00:09:10.607 | 99.00th=[ 241], 99.50th=[ 293], 99.90th=[ 3130], 99.95th=[ 3130], 00:09:10.607 | 99.99th=[ 3130] 00:09:10.607 bw ( KiB/s): min= 4096, max= 4096, per=36.18%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.607 lat (usec) : 250=95.13%, 500=0.56% 00:09:10.607 lat (msec) : 4=0.19%, 50=4.12% 00:09:10.607 cpu : usr=0.60%, sys=0.80%, ctx=534, majf=0, minf=2 00:09:10.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.607 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.607 job2: (groupid=0, jobs=1): err= 0: pid=2622891: Wed Nov 20 16:02:11 2024 00:09:10.607 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:10.607 slat (nsec): min=6725, max=25900, avg=8342.28, stdev=2990.37 00:09:10.607 clat (usec): min=195, max=41956, avg=1618.73, stdev=7169.68 00:09:10.607 lat (usec): min=203, max=41980, avg=1627.07, stdev=7172.16 00:09:10.607 clat percentiles (usec): 00:09:10.607 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:09:10.607 | 30.00th=[ 253], 
40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 281], 00:09:10.607 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 449], 00:09:10.607 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:10.607 | 99.99th=[42206] 00:09:10.607 write: IOPS=688, BW=2753KiB/s (2819kB/s)(2756KiB/1001msec); 0 zone resets 00:09:10.607 slat (usec): min=9, max=24763, avg=47.46, stdev=942.99 00:09:10.607 clat (usec): min=129, max=394, avg=190.52, stdev=35.57 00:09:10.607 lat (usec): min=140, max=25038, avg=237.98, stdev=946.87 00:09:10.607 clat percentiles (usec): 00:09:10.607 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:09:10.607 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:09:10.607 | 70.00th=[ 198], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:09:10.607 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 396], 99.95th=[ 396], 00:09:10.607 | 99.99th=[ 396] 00:09:10.607 bw ( KiB/s): min= 4096, max= 4096, per=36.18%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.607 lat (usec) : 250=64.36%, 500=34.14% 00:09:10.607 lat (msec) : 2=0.08%, 50=1.42% 00:09:10.607 cpu : usr=0.90%, sys=0.90%, ctx=1204, majf=0, minf=1 00:09:10.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.607 issued rwts: total=512,689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.607 job3: (groupid=0, jobs=1): err= 0: pid=2622892: Wed Nov 20 16:02:11 2024 00:09:10.607 read: IOPS=728, BW=2912KiB/s (2982kB/s)(2924KiB/1004msec) 00:09:10.607 slat (nsec): min=6489, max=23139, avg=7936.52, stdev=2407.25 00:09:10.607 clat (usec): min=164, max=41988, avg=1098.16, stdev=5973.33 00:09:10.607 lat (usec): min=171, 
max=42011, avg=1106.10, stdev=5975.32 00:09:10.607 clat percentiles (usec): 00:09:10.607 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:10.607 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:09:10.607 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 253], 00:09:10.607 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:10.607 | 99.99th=[42206] 00:09:10.607 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:09:10.607 slat (nsec): min=9346, max=45271, avg=11080.44, stdev=2901.43 00:09:10.607 clat (usec): min=118, max=377, avg=175.91, stdev=40.64 00:09:10.607 lat (usec): min=128, max=395, avg=186.99, stdev=41.18 00:09:10.607 clat percentiles (usec): 00:09:10.607 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 139], 00:09:10.607 | 30.00th=[ 147], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 180], 00:09:10.607 | 70.00th=[ 190], 80.00th=[ 204], 90.00th=[ 243], 95.00th=[ 253], 00:09:10.607 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 379], 00:09:10.607 | 99.99th=[ 379] 00:09:10.607 bw ( KiB/s): min= 4096, max= 4096, per=36.18%, avg=4096.00, stdev= 0.00, samples=2 00:09:10.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:10.607 lat (usec) : 250=94.13%, 500=4.96% 00:09:10.607 lat (msec) : 50=0.91% 00:09:10.607 cpu : usr=0.70%, sys=1.89%, ctx=1755, majf=0, minf=2 00:09:10.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.607 issued rwts: total=731,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.607 00:09:10.607 Run status group 0 (all jobs): 00:09:10.607 READ: bw=7080KiB/s (7250kB/s), 87.6KiB/s-2912KiB/s (89.8kB/s-2982kB/s), io=7108KiB (7279kB), 
run=1001-1004msec 00:09:10.607 WRITE: bw=11.1MiB/s (11.6MB/s), 2040KiB/s-4080KiB/s (2089kB/s-4178kB/s), io=11.1MiB (11.6MB), run=1001-1004msec 00:09:10.607 00:09:10.607 Disk stats (read/write): 00:09:10.607 nvme0n1: ios=71/512, merge=0/0, ticks=1702/86, in_queue=1788, util=98.00% 00:09:10.607 nvme0n2: ios=33/512, merge=0/0, ticks=745/95, in_queue=840, util=86.88% 00:09:10.607 nvme0n3: ios=319/512, merge=0/0, ticks=1693/101, in_queue=1794, util=98.43% 00:09:10.607 nvme0n4: ios=512/671, merge=0/0, ticks=716/121, in_queue=837, util=89.59% 00:09:10.607 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:10.607 [global] 00:09:10.607 thread=1 00:09:10.607 invalidate=1 00:09:10.607 rw=randwrite 00:09:10.607 time_based=1 00:09:10.607 runtime=1 00:09:10.607 ioengine=libaio 00:09:10.607 direct=1 00:09:10.607 bs=4096 00:09:10.607 iodepth=1 00:09:10.607 norandommap=0 00:09:10.607 numjobs=1 00:09:10.607 00:09:10.607 verify_dump=1 00:09:10.607 verify_backlog=512 00:09:10.607 verify_state_save=0 00:09:10.607 do_verify=1 00:09:10.607 verify=crc32c-intel 00:09:10.607 [job0] 00:09:10.607 filename=/dev/nvme0n1 00:09:10.607 [job1] 00:09:10.607 filename=/dev/nvme0n2 00:09:10.607 [job2] 00:09:10.607 filename=/dev/nvme0n3 00:09:10.607 [job3] 00:09:10.607 filename=/dev/nvme0n4 00:09:10.607 Could not set queue depth (nvme0n1) 00:09:10.607 Could not set queue depth (nvme0n2) 00:09:10.607 Could not set queue depth (nvme0n3) 00:09:10.607 Could not set queue depth (nvme0n4) 00:09:10.865 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.865 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.865 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.865 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.865 fio-3.35 00:09:10.865 Starting 4 threads 00:09:12.241 00:09:12.241 job0: (groupid=0, jobs=1): err= 0: pid=2623260: Wed Nov 20 16:02:12 2024 00:09:12.241 read: IOPS=2000, BW=8004KiB/s (8196kB/s)(8268KiB/1033msec) 00:09:12.241 slat (nsec): min=6823, max=25829, avg=8049.61, stdev=1143.66 00:09:12.241 clat (usec): min=183, max=40574, avg=270.54, stdev=887.77 00:09:12.241 lat (usec): min=191, max=40587, avg=278.59, stdev=887.87 00:09:12.241 clat percentiles (usec): 00:09:12.241 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:09:12.241 | 30.00th=[ 223], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 269], 00:09:12.241 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:09:12.241 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 693], 99.95th=[ 693], 00:09:12.241 | 99.99th=[40633] 00:09:12.241 write: IOPS=2478, BW=9913KiB/s (10.1MB/s)(10.0MiB/1033msec); 0 zone resets 00:09:12.241 slat (nsec): min=8641, max=61667, avg=10728.27, stdev=1608.66 00:09:12.241 clat (usec): min=119, max=356, avg=162.72, stdev=18.66 00:09:12.241 lat (usec): min=129, max=418, avg=173.45, stdev=19.18 00:09:12.241 clat percentiles (usec): 00:09:12.241 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:12.241 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:09:12.241 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 198], 00:09:12.241 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 269], 99.95th=[ 273], 00:09:12.241 | 99.99th=[ 359] 00:09:12.241 bw ( KiB/s): min=10152, max=10328, per=43.25%, avg=10240.00, stdev=124.45, samples=2 00:09:12.241 iops : min= 2538, max= 2582, avg=2560.00, stdev=31.11, samples=2 00:09:12.241 lat (usec) : 250=73.91%, 500=26.02%, 750=0.04% 00:09:12.241 lat (msec) : 50=0.02% 00:09:12.241 cpu : usr=3.10%, sys=6.98%, ctx=4628, majf=0, minf=1 00:09:12.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.241 issued rwts: total=2067,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.241 job1: (groupid=0, jobs=1): err= 0: pid=2623261: Wed Nov 20 16:02:12 2024 00:09:12.241 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:09:12.241 slat (nsec): min=8449, max=29442, avg=18703.14, stdev=6413.67 00:09:12.241 clat (usec): min=40775, max=42071, avg=41015.41, stdev=247.17 00:09:12.241 lat (usec): min=40785, max=42083, avg=41034.11, stdev=245.46 00:09:12.241 clat percentiles (usec): 00:09:12.241 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:12.241 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:12.241 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:12.241 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:12.241 | 99.99th=[42206] 00:09:12.241 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:12.241 slat (nsec): min=9688, max=37143, avg=10687.64, stdev=1619.41 00:09:12.241 clat (usec): min=128, max=304, avg=188.37, stdev=15.57 00:09:12.241 lat (usec): min=138, max=342, avg=199.05, stdev=16.02 00:09:12.241 clat percentiles (usec): 00:09:12.241 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:09:12.241 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:09:12.241 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:09:12.241 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 306], 99.95th=[ 306], 00:09:12.241 | 99.99th=[ 306] 00:09:12.241 bw ( KiB/s): min= 4096, max= 4096, per=17.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.241 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.241 
lat (usec) : 250=95.51%, 500=0.37% 00:09:12.241 lat (msec) : 50=4.12% 00:09:12.241 cpu : usr=0.10%, sys=0.60%, ctx=536, majf=0, minf=1 00:09:12.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.242 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.242 job2: (groupid=0, jobs=1): err= 0: pid=2623262: Wed Nov 20 16:02:12 2024 00:09:12.242 read: IOPS=2004, BW=8020KiB/s (8212kB/s)(8196KiB/1022msec) 00:09:12.242 slat (nsec): min=7389, max=38806, avg=8462.98, stdev=1395.08 00:09:12.242 clat (usec): min=176, max=41314, avg=248.92, stdev=907.80 00:09:12.242 lat (usec): min=184, max=41324, avg=257.39, stdev=907.83 00:09:12.242 clat percentiles (usec): 00:09:12.242 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:09:12.242 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:09:12.242 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:09:12.242 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 285], 00:09:12.242 | 99.99th=[41157] 00:09:12.242 write: IOPS=2504, BW=9.78MiB/s (10.3MB/s)(10.0MiB/1022msec); 0 zone resets 00:09:12.242 slat (nsec): min=10094, max=41488, avg=11279.28, stdev=1629.76 00:09:12.242 clat (usec): min=130, max=3482, avg=176.41, stdev=71.52 00:09:12.242 lat (usec): min=140, max=3493, avg=187.69, stdev=71.57 00:09:12.242 clat percentiles (usec): 00:09:12.242 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:12.242 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:12.242 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 260], 00:09:12.242 | 99.00th=[ 281], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 326], 00:09:12.242 | 99.99th=[ 3490] 00:09:12.242 bw ( 
KiB/s): min= 9944, max=10536, per=43.25%, avg=10240.00, stdev=418.61, samples=2 00:09:12.242 iops : min= 2486, max= 2634, avg=2560.00, stdev=104.65, samples=2 00:09:12.242 lat (usec) : 250=91.86%, 500=8.09% 00:09:12.242 lat (msec) : 4=0.02%, 50=0.02% 00:09:12.242 cpu : usr=3.23%, sys=7.74%, ctx=4609, majf=0, minf=1 00:09:12.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.242 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.242 job3: (groupid=0, jobs=1): err= 0: pid=2623264: Wed Nov 20 16:02:12 2024 00:09:12.242 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:09:12.242 slat (nsec): min=11593, max=25930, avg=16613.18, stdev=4177.83 00:09:12.242 clat (usec): min=40878, max=41991, avg=41125.27, stdev=348.11 00:09:12.242 lat (usec): min=40891, max=42005, avg=41141.89, stdev=346.84 00:09:12.242 clat percentiles (usec): 00:09:12.242 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:12.242 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:12.242 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:12.242 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:12.242 | 99.99th=[42206] 00:09:12.242 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:12.242 slat (nsec): min=11198, max=43833, avg=13114.98, stdev=2401.95 00:09:12.242 clat (usec): min=215, max=863, avg=242.63, stdev=38.30 00:09:12.242 lat (usec): min=234, max=875, avg=255.74, stdev=38.28 00:09:12.242 clat percentiles (usec): 00:09:12.242 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 235], 00:09:12.242 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 
00:09:12.242 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:12.242 | 99.00th=[ 293], 99.50th=[ 586], 99.90th=[ 865], 99.95th=[ 865], 00:09:12.242 | 99.99th=[ 865] 00:09:12.242 bw ( KiB/s): min= 4096, max= 4096, per=17.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.242 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.242 lat (usec) : 250=92.70%, 500=2.62%, 750=0.37%, 1000=0.19% 00:09:12.242 lat (msec) : 50=4.12% 00:09:12.242 cpu : usr=0.19%, sys=0.77%, ctx=535, majf=0, minf=1 00:09:12.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.242 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.242 00:09:12.242 Run status group 0 (all jobs): 00:09:12.242 READ: bw=15.7MiB/s (16.4MB/s), 84.8KiB/s-8020KiB/s (86.8kB/s-8212kB/s), io=16.2MiB (17.0MB), run=1006-1038msec 00:09:12.242 WRITE: bw=23.1MiB/s (24.2MB/s), 1973KiB/s-9.78MiB/s (2020kB/s-10.3MB/s), io=24.0MiB (25.2MB), run=1006-1038msec 00:09:12.242 00:09:12.242 Disk stats (read/write): 00:09:12.242 nvme0n1: ios=1901/2048, merge=0/0, ticks=500/322, in_queue=822, util=88.38% 00:09:12.242 nvme0n2: ios=42/512, merge=0/0, ticks=1722/92, in_queue=1814, util=98.48% 00:09:12.242 nvme0n3: ios=1939/2048, merge=0/0, ticks=888/334, in_queue=1222, util=95.42% 00:09:12.242 nvme0n4: ios=75/512, merge=0/0, ticks=1123/123, in_queue=1246, util=98.43% 00:09:12.242 16:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:12.242 [global] 00:09:12.242 thread=1 00:09:12.242 invalidate=1 00:09:12.242 rw=write 00:09:12.242 time_based=1 00:09:12.242 runtime=1 00:09:12.242 
ioengine=libaio 00:09:12.242 direct=1 00:09:12.242 bs=4096 00:09:12.242 iodepth=128 00:09:12.242 norandommap=0 00:09:12.242 numjobs=1 00:09:12.242 00:09:12.242 verify_dump=1 00:09:12.242 verify_backlog=512 00:09:12.242 verify_state_save=0 00:09:12.242 do_verify=1 00:09:12.242 verify=crc32c-intel 00:09:12.242 [job0] 00:09:12.242 filename=/dev/nvme0n1 00:09:12.242 [job1] 00:09:12.242 filename=/dev/nvme0n2 00:09:12.242 [job2] 00:09:12.242 filename=/dev/nvme0n3 00:09:12.242 [job3] 00:09:12.242 filename=/dev/nvme0n4 00:09:12.242 Could not set queue depth (nvme0n1) 00:09:12.242 Could not set queue depth (nvme0n2) 00:09:12.242 Could not set queue depth (nvme0n3) 00:09:12.242 Could not set queue depth (nvme0n4) 00:09:12.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.242 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.242 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.242 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.242 fio-3.35 00:09:12.242 Starting 4 threads 00:09:13.619 00:09:13.619 job0: (groupid=0, jobs=1): err= 0: pid=2623639: Wed Nov 20 16:02:14 2024 00:09:13.619 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:09:13.619 slat (nsec): min=1097, max=15898k, avg=143169.47, stdev=1042696.11 00:09:13.619 clat (usec): min=5418, max=52479, avg=18356.36, stdev=12277.38 00:09:13.619 lat (usec): min=5423, max=52485, avg=18499.53, stdev=12352.20 00:09:13.619 clat percentiles (usec): 00:09:13.619 | 1.00th=[ 5538], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[ 9765], 00:09:13.619 | 30.00th=[10028], 40.00th=[10290], 50.00th=[11600], 60.00th=[17433], 00:09:13.619 | 70.00th=[19792], 80.00th=[34341], 90.00th=[40109], 95.00th=[43779], 00:09:13.619 | 99.00th=[45876], 99.50th=[52691], 99.90th=[52691], 
99.95th=[52691], 00:09:13.619 | 99.99th=[52691] 00:09:13.619 write: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1012msec); 0 zone resets 00:09:13.619 slat (nsec): min=1901, max=14357k, avg=124313.57, stdev=731688.25 00:09:13.619 clat (usec): min=1124, max=48872, avg=16778.71, stdev=10317.29 00:09:13.619 lat (usec): min=1132, max=48881, avg=16903.03, stdev=10381.15 00:09:13.619 clat percentiles (usec): 00:09:13.619 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 8225], 20.00th=[ 9503], 00:09:13.619 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11863], 60.00th=[16057], 00:09:13.619 | 70.00th=[21365], 80.00th=[23725], 90.00th=[30540], 95.00th=[39584], 00:09:13.619 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:09:13.619 | 99.99th=[49021] 00:09:13.620 bw ( KiB/s): min=12288, max=16440, per=22.56%, avg=14364.00, stdev=2935.91, samples=2 00:09:13.620 iops : min= 3072, max= 4110, avg=3591.00, stdev=733.98, samples=2 00:09:13.620 lat (msec) : 2=0.38%, 10=28.76%, 20=41.72%, 50=28.71%, 100=0.43% 00:09:13.620 cpu : usr=2.27%, sys=4.55%, ctx=295, majf=0, minf=2 00:09:13.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:13.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.620 issued rwts: total=3584,3710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.620 job1: (groupid=0, jobs=1): err= 0: pid=2623640: Wed Nov 20 16:02:14 2024 00:09:13.620 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:09:13.620 slat (nsec): min=1169, max=16078k, avg=141253.23, stdev=944875.10 00:09:13.620 clat (usec): min=5079, max=44154, avg=15611.43, stdev=6362.75 00:09:13.620 lat (usec): min=5084, max=44159, avg=15752.69, stdev=6440.57 00:09:13.620 clat percentiles (usec): 00:09:13.620 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10683], 20.00th=[11338], 
00:09:13.620 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13304], 60.00th=[14877], 00:09:13.620 | 70.00th=[15664], 80.00th=[18220], 90.00th=[23725], 95.00th=[27657], 00:09:13.620 | 99.00th=[41157], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:09:13.620 | 99.99th=[44303] 00:09:13.620 write: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1010msec); 0 zone resets 00:09:13.620 slat (usec): min=2, max=30516, avg=211.63, stdev=1152.97 00:09:13.620 clat (usec): min=1712, max=122455, avg=29037.66, stdev=24766.44 00:09:13.620 lat (usec): min=1725, max=122460, avg=29249.30, stdev=24896.46 00:09:13.620 clat percentiles (msec): 00:09:13.620 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 16], 00:09:13.620 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 22], 00:09:13.620 | 70.00th=[ 26], 80.00th=[ 42], 90.00th=[ 64], 95.00th=[ 93], 00:09:13.620 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:09:13.620 | 99.99th=[ 123] 00:09:13.620 bw ( KiB/s): min= 7816, max=14328, per=17.39%, avg=11072.00, stdev=4604.68, samples=2 00:09:13.620 iops : min= 1954, max= 3582, avg=2768.00, stdev=1151.17, samples=2 00:09:13.620 lat (msec) : 2=0.37%, 4=0.70%, 10=5.87%, 20=63.32%, 50=21.19% 00:09:13.620 lat (msec) : 100=7.00%, 250=1.56% 00:09:13.620 cpu : usr=2.58%, sys=2.97%, ctx=363, majf=0, minf=1 00:09:13.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:13.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.620 issued rwts: total=2560,2895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.620 job2: (groupid=0, jobs=1): err= 0: pid=2623641: Wed Nov 20 16:02:14 2024 00:09:13.620 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:09:13.620 slat (nsec): min=1352, max=22568k, avg=133312.60, stdev=1000127.64 00:09:13.620 clat (usec): 
min=4825, max=53343, avg=16047.29, stdev=7443.13 00:09:13.620 lat (usec): min=4831, max=53353, avg=16180.60, stdev=7519.40 00:09:13.620 clat percentiles (usec): 00:09:13.620 | 1.00th=[ 6587], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[10552], 00:09:13.620 | 30.00th=[10945], 40.00th=[12518], 50.00th=[13435], 60.00th=[15795], 00:09:13.620 | 70.00th=[18744], 80.00th=[20317], 90.00th=[23725], 95.00th=[28181], 00:09:13.620 | 99.00th=[47973], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:09:13.620 | 99.99th=[53216] 00:09:13.620 write: IOPS=3688, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec); 0 zone resets 00:09:13.620 slat (usec): min=2, max=14346, avg=134.60, stdev=658.43 00:09:13.620 clat (usec): min=1519, max=53305, avg=18842.19, stdev=8496.47 00:09:13.620 lat (usec): min=1535, max=53309, avg=18976.79, stdev=8560.73 00:09:13.620 clat percentiles (usec): 00:09:13.620 | 1.00th=[ 4228], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9765], 00:09:13.620 | 30.00th=[13435], 40.00th=[17433], 50.00th=[17957], 60.00th=[18482], 00:09:13.620 | 70.00th=[22152], 80.00th=[25035], 90.00th=[31065], 95.00th=[35914], 00:09:13.620 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[53216], 00:09:13.620 | 99.99th=[53216] 00:09:13.620 bw ( KiB/s): min=12360, max=16368, per=22.56%, avg=14364.00, stdev=2834.08, samples=2 00:09:13.620 iops : min= 3090, max= 4092, avg=3591.00, stdev=708.52, samples=2 00:09:13.620 lat (msec) : 2=0.03%, 4=0.37%, 10=14.64%, 20=55.44%, 50=29.20% 00:09:13.620 lat (msec) : 100=0.32% 00:09:13.620 cpu : usr=3.09%, sys=4.49%, ctx=390, majf=0, minf=1 00:09:13.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:13.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.620 issued rwts: total=3584,3703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.620 latency : target=0, window=0, percentile=100.00%, depth=128 
00:09:13.620 job3: (groupid=0, jobs=1): err= 0: pid=2623642: Wed Nov 20 16:02:14 2024 00:09:13.620 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:09:13.620 slat (nsec): min=1122, max=8595.1k, avg=86235.33, stdev=527200.53 00:09:13.620 clat (usec): min=336, max=19806, avg=11520.58, stdev=2215.72 00:09:13.620 lat (usec): min=343, max=19819, avg=11606.81, stdev=2257.20 00:09:13.620 clat percentiles (usec): 00:09:13.620 | 1.00th=[ 5211], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9896], 00:09:13.620 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:09:13.620 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14484], 95.00th=[15533], 00:09:13.620 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19530], 99.95th=[19530], 00:09:13.620 | 99.99th=[19792] 00:09:13.620 write: IOPS=5740, BW=22.4MiB/s (23.5MB/s)(22.6MiB/1010msec); 0 zone resets 00:09:13.620 slat (nsec): min=1947, max=7517.6k, avg=73822.01, stdev=466573.65 00:09:13.620 clat (usec): min=224, max=27166, avg=10873.27, stdev=3187.02 00:09:13.620 lat (usec): min=334, max=27169, avg=10947.09, stdev=3226.99 00:09:13.620 clat percentiles (usec): 00:09:13.620 | 1.00th=[ 1254], 5.00th=[ 4817], 10.00th=[ 6718], 20.00th=[ 9503], 00:09:13.620 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11338], 00:09:13.620 | 70.00th=[11731], 80.00th=[12518], 90.00th=[13960], 95.00th=[17433], 00:09:13.620 | 99.00th=[18482], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:09:13.620 | 99.99th=[27132] 00:09:13.620 bw ( KiB/s): min=22584, max=22784, per=35.63%, avg=22684.00, stdev=141.42, samples=2 00:09:13.620 iops : min= 5646, max= 5696, avg=5671.00, stdev=35.36, samples=2 00:09:13.620 lat (usec) : 250=0.01%, 500=0.34%, 750=0.23%, 1000=0.10% 00:09:13.620 lat (msec) : 2=0.26%, 4=1.20%, 10=21.43%, 20=76.11%, 50=0.32% 00:09:13.620 cpu : usr=3.47%, sys=8.03%, ctx=482, majf=0, minf=1 00:09:13.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:13.620 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.620 issued rwts: total=5632,5798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.620 00:09:13.620 Run status group 0 (all jobs): 00:09:13.620 READ: bw=59.3MiB/s (62.2MB/s), 9.90MiB/s-21.8MiB/s (10.4MB/s-22.8MB/s), io=60.0MiB (62.9MB), run=1004-1012msec 00:09:13.620 WRITE: bw=62.2MiB/s (65.2MB/s), 11.2MiB/s-22.4MiB/s (11.7MB/s-23.5MB/s), io=62.9MiB (66.0MB), run=1004-1012msec 00:09:13.620 00:09:13.620 Disk stats (read/write): 00:09:13.620 nvme0n1: ios=2673/3072, merge=0/0, ticks=17473/17025, in_queue=34498, util=86.57% 00:09:13.620 nvme0n2: ios=2476/2560, merge=0/0, ticks=26861/38434, in_queue=65295, util=98.27% 00:09:13.620 nvme0n3: ios=3072/3351, merge=0/0, ticks=42538/59706, in_queue=102244, util=88.92% 00:09:13.620 nvme0n4: ios=4666/4655, merge=0/0, ticks=30577/27365, in_queue=57942, util=98.42% 00:09:13.620 16:02:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:13.620 [global] 00:09:13.620 thread=1 00:09:13.620 invalidate=1 00:09:13.620 rw=randwrite 00:09:13.620 time_based=1 00:09:13.620 runtime=1 00:09:13.620 ioengine=libaio 00:09:13.620 direct=1 00:09:13.620 bs=4096 00:09:13.620 iodepth=128 00:09:13.620 norandommap=0 00:09:13.620 numjobs=1 00:09:13.620 00:09:13.620 verify_dump=1 00:09:13.620 verify_backlog=512 00:09:13.620 verify_state_save=0 00:09:13.620 do_verify=1 00:09:13.620 verify=crc32c-intel 00:09:13.620 [job0] 00:09:13.620 filename=/dev/nvme0n1 00:09:13.620 [job1] 00:09:13.620 filename=/dev/nvme0n2 00:09:13.620 [job2] 00:09:13.620 filename=/dev/nvme0n3 00:09:13.620 [job3] 00:09:13.620 filename=/dev/nvme0n4 00:09:13.620 Could not set queue depth (nvme0n1) 00:09:13.620 Could not set queue depth 
(nvme0n2) 00:09:13.620 Could not set queue depth (nvme0n3) 00:09:13.620 Could not set queue depth (nvme0n4) 00:09:13.880 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.880 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.880 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.880 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.880 fio-3.35 00:09:13.880 Starting 4 threads 00:09:15.388 00:09:15.388 job0: (groupid=0, jobs=1): err= 0: pid=2624010: Wed Nov 20 16:02:15 2024 00:09:15.388 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:09:15.388 slat (nsec): min=1121, max=21751k, avg=138754.19, stdev=923157.54 00:09:15.388 clat (usec): min=6842, max=45226, avg=16280.13, stdev=6122.10 00:09:15.388 lat (usec): min=6848, max=45234, avg=16418.89, stdev=6212.82 00:09:15.388 clat percentiles (usec): 00:09:15.388 | 1.00th=[ 7635], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[11469], 00:09:15.388 | 30.00th=[11994], 40.00th=[14353], 50.00th=[14877], 60.00th=[15795], 00:09:15.388 | 70.00th=[16909], 80.00th=[22152], 90.00th=[25297], 95.00th=[27919], 00:09:15.388 | 99.00th=[35914], 99.50th=[37487], 99.90th=[45351], 99.95th=[45351], 00:09:15.388 | 99.99th=[45351] 00:09:15.388 write: IOPS=3189, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1007msec); 0 zone resets 00:09:15.388 slat (nsec): min=1912, max=10561k, avg=172321.68, stdev=723161.31 00:09:15.388 clat (usec): min=1332, max=64035, avg=24129.11, stdev=13470.81 00:09:15.388 lat (usec): min=1342, max=64046, avg=24301.43, stdev=13555.72 00:09:15.388 clat percentiles (usec): 00:09:15.388 | 1.00th=[ 5604], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10421], 00:09:15.388 | 30.00th=[12256], 40.00th=[19268], 50.00th=[22152], 60.00th=[25297], 00:09:15.388 | 
70.00th=[30278], 80.00th=[35914], 90.00th=[44303], 95.00th=[48497], 00:09:15.388 | 99.00th=[60556], 99.50th=[61080], 99.90th=[64226], 99.95th=[64226], 00:09:15.388 | 99.99th=[64226] 00:09:15.388 bw ( KiB/s): min= 9856, max=14840, per=19.55%, avg=12348.00, stdev=3524.22, samples=2 00:09:15.388 iops : min= 2464, max= 3710, avg=3087.00, stdev=881.06, samples=2 00:09:15.388 lat (msec) : 2=0.08%, 10=9.83%, 20=49.76%, 50=38.40%, 100=1.93% 00:09:15.388 cpu : usr=2.29%, sys=3.78%, ctx=391, majf=0, minf=1 00:09:15.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.388 issued rwts: total=3072,3212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.388 job1: (groupid=0, jobs=1): err= 0: pid=2624011: Wed Nov 20 16:02:15 2024 00:09:15.388 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:15.388 slat (nsec): min=1072, max=14865k, avg=118159.98, stdev=886972.73 00:09:15.388 clat (usec): min=3344, max=42677, avg=14912.49, stdev=6207.04 00:09:15.388 lat (usec): min=3381, max=45175, avg=15030.65, stdev=6285.86 00:09:15.388 clat percentiles (usec): 00:09:15.388 | 1.00th=[ 6718], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[10159], 00:09:15.388 | 30.00th=[10552], 40.00th=[10814], 50.00th=[12911], 60.00th=[14877], 00:09:15.388 | 70.00th=[17695], 80.00th=[19792], 90.00th=[23987], 95.00th=[27395], 00:09:15.388 | 99.00th=[32113], 99.50th=[34866], 99.90th=[39060], 99.95th=[40109], 00:09:15.388 | 99.99th=[42730] 00:09:15.388 write: IOPS=4203, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1003msec); 0 zone resets 00:09:15.388 slat (nsec): min=1824, max=32605k, avg=113745.79, stdev=943154.83 00:09:15.388 clat (usec): min=1968, max=48143, avg=15448.14, stdev=9455.12 00:09:15.388 lat (usec): min=2207, max=48166, avg=15561.89, 
stdev=9502.75 00:09:15.388 clat percentiles (usec): 00:09:15.388 | 1.00th=[ 3064], 5.00th=[ 4883], 10.00th=[ 7373], 20.00th=[10028], 00:09:15.388 | 30.00th=[10945], 40.00th=[11863], 50.00th=[12387], 60.00th=[13173], 00:09:15.388 | 70.00th=[15008], 80.00th=[18744], 90.00th=[29754], 95.00th=[40633], 00:09:15.388 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47973], 99.95th=[47973], 00:09:15.388 | 99.99th=[47973] 00:09:15.388 bw ( KiB/s): min=16384, max=16384, per=25.94%, avg=16384.00, stdev= 0.00, samples=2 00:09:15.388 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:15.388 lat (msec) : 2=0.01%, 4=2.01%, 10=15.27%, 20=63.67%, 50=19.04% 00:09:15.388 cpu : usr=2.59%, sys=3.79%, ctx=324, majf=0, minf=1 00:09:15.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.388 issued rwts: total=4096,4216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.388 job2: (groupid=0, jobs=1): err= 0: pid=2624012: Wed Nov 20 16:02:15 2024 00:09:15.388 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:15.388 slat (nsec): min=1982, max=10587k, avg=169193.79, stdev=903619.44 00:09:15.388 clat (usec): min=8432, max=43200, avg=21152.17, stdev=7778.84 00:09:15.388 lat (usec): min=8439, max=45398, avg=21321.37, stdev=7859.77 00:09:15.388 clat percentiles (usec): 00:09:15.388 | 1.00th=[10159], 5.00th=[11600], 10.00th=[12125], 20.00th=[14222], 00:09:15.388 | 30.00th=[16319], 40.00th=[17957], 50.00th=[18744], 60.00th=[20841], 00:09:15.388 | 70.00th=[24773], 80.00th=[28705], 90.00th=[33424], 95.00th=[35914], 00:09:15.388 | 99.00th=[39584], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:15.389 | 99.99th=[43254] 00:09:15.389 write: IOPS=2992, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec); 0 
zone resets 00:09:15.389 slat (usec): min=2, max=9923, avg=179.56, stdev=806.41 00:09:15.389 clat (usec): min=5090, max=47789, avg=24093.41, stdev=10645.24 00:09:15.389 lat (usec): min=5101, max=47795, avg=24272.97, stdev=10722.65 00:09:15.389 clat percentiles (usec): 00:09:15.389 | 1.00th=[ 7701], 5.00th=[10683], 10.00th=[11207], 20.00th=[13042], 00:09:15.389 | 30.00th=[15270], 40.00th=[19530], 50.00th=[21103], 60.00th=[26870], 00:09:15.389 | 70.00th=[31851], 80.00th=[35390], 90.00th=[38536], 95.00th=[41681], 00:09:15.389 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:09:15.389 | 99.99th=[47973] 00:09:15.389 bw ( KiB/s): min= 9096, max=13968, per=18.26%, avg=11532.00, stdev=3445.02, samples=2 00:09:15.389 iops : min= 2274, max= 3492, avg=2883.00, stdev=861.26, samples=2 00:09:15.389 lat (msec) : 10=2.21%, 20=47.61%, 50=50.18% 00:09:15.389 cpu : usr=2.19%, sys=5.17%, ctx=306, majf=0, minf=1 00:09:15.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:15.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.389 issued rwts: total=2560,3010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.389 job3: (groupid=0, jobs=1): err= 0: pid=2624013: Wed Nov 20 16:02:15 2024 00:09:15.389 read: IOPS=5716, BW=22.3MiB/s (23.4MB/s)(23.4MiB/1050msec) 00:09:15.389 slat (nsec): min=1427, max=10141k, avg=87916.58, stdev=622995.38 00:09:15.389 clat (usec): min=3445, max=54084, avg=11843.00, stdev=6545.10 00:09:15.389 lat (usec): min=3451, max=61140, avg=11930.92, stdev=6570.16 00:09:15.389 clat percentiles (usec): 00:09:15.389 | 1.00th=[ 3982], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[ 8979], 00:09:15.389 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10945], 00:09:15.389 | 70.00th=[11731], 80.00th=[13960], 90.00th=[16581], 
95.00th=[18744], 00:09:15.389 | 99.00th=[50070], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:09:15.389 | 99.99th=[54264] 00:09:15.389 write: IOPS=5851, BW=22.9MiB/s (24.0MB/s)(24.0MiB/1050msec); 0 zone resets 00:09:15.389 slat (usec): min=2, max=5100, avg=64.52, stdev=208.45 00:09:15.389 clat (usec): min=369, max=58923, avg=10108.91, stdev=5627.10 00:09:15.389 lat (usec): min=401, max=58926, avg=10173.44, stdev=5645.94 00:09:15.389 clat percentiles (usec): 00:09:15.389 | 1.00th=[ 1205], 5.00th=[ 3523], 10.00th=[ 4621], 20.00th=[ 7373], 00:09:15.389 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:09:15.389 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12387], 95.00th=[20317], 00:09:15.389 | 99.00th=[35914], 99.50th=[47973], 99.90th=[53740], 99.95th=[58983], 00:09:15.389 | 99.99th=[58983] 00:09:15.389 bw ( KiB/s): min=24560, max=24592, per=38.90%, avg=24576.00, stdev=22.63, samples=2 00:09:15.389 iops : min= 6140, max= 6148, avg=6144.00, stdev= 5.66, samples=2 00:09:15.389 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.07% 00:09:15.389 lat (msec) : 2=0.77%, 4=3.01%, 10=54.42%, 20=36.93%, 50=3.99% 00:09:15.389 lat (msec) : 100=0.75% 00:09:15.389 cpu : usr=3.05%, sys=6.29%, ctx=837, majf=0, minf=2 00:09:15.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:15.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.389 issued rwts: total=6002,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.389 00:09:15.389 Run status group 0 (all jobs): 00:09:15.389 READ: bw=58.5MiB/s (61.4MB/s), 9.94MiB/s-22.3MiB/s (10.4MB/s-23.4MB/s), io=61.4MiB (64.4MB), run=1003-1050msec 00:09:15.389 WRITE: bw=61.7MiB/s (64.7MB/s), 11.7MiB/s-22.9MiB/s (12.3MB/s-24.0MB/s), io=64.8MiB (67.9MB), run=1003-1050msec 00:09:15.389 00:09:15.389 Disk stats 
(read/write): 00:09:15.389 nvme0n1: ios=2612/2679, merge=0/0, ticks=23645/34621, in_queue=58266, util=98.10% 00:09:15.389 nvme0n2: ios=3319/3584, merge=0/0, ticks=26149/27777, in_queue=53926, util=97.46% 00:09:15.389 nvme0n3: ios=2193/2560, merge=0/0, ticks=19240/21351, in_queue=40591, util=97.09% 00:09:15.389 nvme0n4: ios=5120/5207, merge=0/0, ticks=54188/51225, in_queue=105413, util=89.62% 00:09:15.389 16:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:15.389 16:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2624245 00:09:15.389 16:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:15.389 16:02:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:15.389 [global] 00:09:15.389 thread=1 00:09:15.389 invalidate=1 00:09:15.389 rw=read 00:09:15.389 time_based=1 00:09:15.389 runtime=10 00:09:15.389 ioengine=libaio 00:09:15.389 direct=1 00:09:15.389 bs=4096 00:09:15.389 iodepth=1 00:09:15.389 norandommap=1 00:09:15.389 numjobs=1 00:09:15.389 00:09:15.389 [job0] 00:09:15.389 filename=/dev/nvme0n1 00:09:15.389 [job1] 00:09:15.389 filename=/dev/nvme0n2 00:09:15.389 [job2] 00:09:15.389 filename=/dev/nvme0n3 00:09:15.389 [job3] 00:09:15.389 filename=/dev/nvme0n4 00:09:15.389 Could not set queue depth (nvme0n1) 00:09:15.389 Could not set queue depth (nvme0n2) 00:09:15.389 Could not set queue depth (nvme0n3) 00:09:15.389 Could not set queue depth (nvme0n4) 00:09:15.647 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.647 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.647 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.647 job3: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.647 fio-3.35 00:09:15.647 Starting 4 threads 00:09:18.173 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:18.431 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:18.431 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=397312, buflen=4096 00:09:18.431 fio: pid=2624425, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.688 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.688 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:18.688 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=368640, buflen=4096 00:09:18.688 fio: pid=2624419, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.946 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51691520, buflen=4096 00:09:18.946 fio: pid=2624396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.946 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.946 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:18.946 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5947392, buflen=4096 00:09:18.946 fio: pid=2624405, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:09:18.946 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.946 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:19.204 00:09:19.204 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2624396: Wed Nov 20 16:02:19 2024 00:09:19.204 read: IOPS=3999, BW=15.6MiB/s (16.4MB/s)(49.3MiB/3156msec) 00:09:19.204 slat (usec): min=6, max=12540, avg=10.71, stdev=187.13 00:09:19.204 clat (usec): min=155, max=41058, avg=236.40, stdev=891.70 00:09:19.204 lat (usec): min=163, max=41081, avg=247.11, stdev=912.14 00:09:19.204 clat percentiles (usec): 00:09:19.204 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:09:19.204 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 225], 00:09:19.204 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:09:19.204 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 1237], 99.95th=[ 9241], 00:09:19.204 | 99.99th=[41157] 00:09:19.204 bw ( KiB/s): min= 7824, max=19632, per=95.33%, avg=16096.33, stdev=4360.66, samples=6 00:09:19.204 iops : min= 1956, max= 4908, avg=4024.00, stdev=1090.14, samples=6 00:09:19.204 lat (usec) : 250=84.41%, 500=15.44%, 750=0.01% 00:09:19.204 lat (msec) : 2=0.07%, 10=0.01%, 50=0.05% 00:09:19.204 cpu : usr=1.20%, sys=4.37%, ctx=12626, majf=0, minf=1 00:09:19.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 issued rwts: total=12621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.204 job1: (groupid=0, 
jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2624405: Wed Nov 20 16:02:19 2024 00:09:19.204 read: IOPS=430, BW=1719KiB/s (1761kB/s)(5808KiB/3378msec) 00:09:19.204 slat (usec): min=6, max=4814, avg=11.33, stdev=126.17 00:09:19.204 clat (usec): min=158, max=41891, avg=2298.66, stdev=9034.93 00:09:19.204 lat (usec): min=165, max=45978, avg=2309.98, stdev=9053.35 00:09:19.204 clat percentiles (usec): 00:09:19.204 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:09:19.204 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:09:19.204 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[40633], 00:09:19.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:19.204 | 99.99th=[41681] 00:09:19.204 bw ( KiB/s): min= 93, max=11032, per=11.39%, avg=1923.50, stdev=4462.24, samples=6 00:09:19.204 iops : min= 23, max= 2758, avg=480.83, stdev=1115.58, samples=6 00:09:19.204 lat (usec) : 250=93.81%, 500=0.96% 00:09:19.204 lat (msec) : 50=5.16% 00:09:19.204 cpu : usr=0.18%, sys=0.38%, ctx=1455, majf=0, minf=2 00:09:19.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 issued rwts: total=1453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.204 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2624419: Wed Nov 20 16:02:19 2024 00:09:19.204 read: IOPS=30, BW=121KiB/s (123kB/s)(360KiB/2985msec) 00:09:19.204 slat (nsec): min=8892, max=31436, avg=20760.86, stdev=5484.60 00:09:19.204 clat (usec): min=277, max=41889, avg=32826.93, stdev=16341.48 00:09:19.204 lat (usec): min=302, max=41913, avg=32847.68, stdev=16340.58 00:09:19.204 clat percentiles (usec): 
00:09:19.204 | 1.00th=[ 277], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 523], 00:09:19.204 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:09:19.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:19.204 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:19.204 | 99.99th=[41681] 00:09:19.204 bw ( KiB/s): min= 112, max= 144, per=0.73%, avg=124.80, stdev=17.53, samples=5 00:09:19.204 iops : min= 28, max= 36, avg=31.20, stdev= 4.38, samples=5 00:09:19.204 lat (usec) : 500=18.68%, 750=1.10% 00:09:19.204 lat (msec) : 50=79.12% 00:09:19.204 cpu : usr=0.13%, sys=0.00%, ctx=91, majf=0, minf=2 00:09:19.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.204 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2624425: Wed Nov 20 16:02:19 2024 00:09:19.204 read: IOPS=35, BW=142KiB/s (145kB/s)(388KiB/2736msec) 00:09:19.204 slat (nsec): min=8639, max=31938, avg=18731.35, stdev=6166.49 00:09:19.204 clat (usec): min=208, max=41478, avg=27963.59, stdev=19097.79 00:09:19.204 lat (usec): min=218, max=41486, avg=27982.30, stdev=19097.93 00:09:19.204 clat percentiles (usec): 00:09:19.204 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 241], 00:09:19.204 | 30.00th=[ 277], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:09:19.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:19.204 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:19.204 | 99.99th=[41681] 00:09:19.204 bw ( KiB/s): min= 104, max= 152, per=0.81%, avg=136.00, stdev=19.60, samples=5 
00:09:19.204 iops : min= 26, max= 38, avg=34.00, stdev= 4.90, samples=5 00:09:19.204 lat (usec) : 250=23.47%, 500=8.16% 00:09:19.204 lat (msec) : 50=67.35% 00:09:19.204 cpu : usr=0.00%, sys=0.15%, ctx=98, majf=0, minf=1 00:09:19.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.204 issued rwts: total=98,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.204 00:09:19.204 Run status group 0 (all jobs): 00:09:19.204 READ: bw=16.5MiB/s (17.3MB/s), 121KiB/s-15.6MiB/s (123kB/s-16.4MB/s), io=55.7MiB (58.4MB), run=2736-3378msec 00:09:19.204 00:09:19.204 Disk stats (read/write): 00:09:19.204 nvme0n1: ios=12503/0, merge=0/0, ticks=2878/0, in_queue=2878, util=94.48% 00:09:19.204 nvme0n2: ios=1451/0, merge=0/0, ticks=3292/0, in_queue=3292, util=96.24% 00:09:19.204 nvme0n3: ios=87/0, merge=0/0, ticks=2834/0, in_queue=2834, util=96.55% 00:09:19.204 nvme0n4: ios=94/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.44% 00:09:19.204 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.204 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:19.462 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.462 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:19.720 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.720 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:19.978 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.978 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2624245 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:20.236 16:02:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:20.236 nvmf hotplug test: fio failed as expected 00:09:20.236 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.493 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:20.493 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:20.493 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:20.493 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:20.493 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:20.493 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.494 rmmod nvme_tcp 00:09:20.494 rmmod nvme_fabrics 00:09:20.494 rmmod nvme_keyring 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2621536 ']' 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2621536 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2621536 ']' 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2621536 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621536 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2621536' 00:09:20.494 killing process with pid 2621536 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2621536 00:09:20.494 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2621536 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.753 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.292 00:09:23.292 real 0m26.929s 00:09:23.292 user 1m47.412s 00:09:23.292 sys 0m8.242s 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.292 ************************************ 00:09:23.292 END TEST nvmf_fio_target 00:09:23.292 ************************************ 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.292 16:02:23 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.292 ************************************ 00:09:23.292 START TEST nvmf_bdevio 00:09:23.292 ************************************ 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:23.292 * Looking for test storage... 00:09:23.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.292 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.293 --rc genhtml_branch_coverage=1 00:09:23.293 --rc genhtml_function_coverage=1 00:09:23.293 --rc genhtml_legend=1 00:09:23.293 --rc geninfo_all_blocks=1 00:09:23.293 --rc geninfo_unexecuted_blocks=1 00:09:23.293 00:09:23.293 ' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.293 --rc genhtml_branch_coverage=1 00:09:23.293 --rc genhtml_function_coverage=1 00:09:23.293 --rc genhtml_legend=1 00:09:23.293 --rc geninfo_all_blocks=1 00:09:23.293 --rc geninfo_unexecuted_blocks=1 00:09:23.293 00:09:23.293 ' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:23.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.293 --rc genhtml_branch_coverage=1 00:09:23.293 --rc genhtml_function_coverage=1 00:09:23.293 --rc genhtml_legend=1 00:09:23.293 --rc geninfo_all_blocks=1 00:09:23.293 --rc geninfo_unexecuted_blocks=1 00:09:23.293 00:09:23.293 ' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.293 --rc genhtml_branch_coverage=1 00:09:23.293 --rc genhtml_function_coverage=1 00:09:23.293 --rc genhtml_legend=1 00:09:23.293 --rc geninfo_all_blocks=1 00:09:23.293 --rc geninfo_unexecuted_blocks=1 00:09:23.293 00:09:23.293 ' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.293 16:02:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.293 16:02:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.293 16:02:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.293 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:23.293 
16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:23.294 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.294 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.866 16:02:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:29.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:29.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.866 
16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:29.866 Found net devices under 0000:86:00.0: cvl_0_0 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:29.866 Found net devices under 0000:86:00.1: cvl_0_1 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.866 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:09:29.867 00:09:29.867 --- 10.0.0.2 ping statistics --- 00:09:29.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.867 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:09:29.867 00:09:29.867 --- 10.0.0.1 ping statistics --- 00:09:29.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.867 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.867 16:02:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2628869 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2628869 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2628869 ']' 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.867 16:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 [2024-11-20 16:02:29.820145] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:09:29.867 [2024-11-20 16:02:29.820191] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.867 [2024-11-20 16:02:29.899840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.867 [2024-11-20 16:02:29.942213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.867 [2024-11-20 16:02:29.942249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.867 [2024-11-20 16:02:29.942256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.867 [2024-11-20 16:02:29.942262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.867 [2024-11-20 16:02:29.942268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:29.867 [2024-11-20 16:02:29.943796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.867 [2024-11-20 16:02:29.943823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:29.867 [2024-11-20 16:02:29.943911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.867 [2024-11-20 16:02:29.943911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 [2024-11-20 16:02:30.093861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.867 16:02:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 Malloc0 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.867 [2024-11-20 16:02:30.157865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.867 { 00:09:29.867 "params": { 00:09:29.867 "name": "Nvme$subsystem", 00:09:29.867 "trtype": "$TEST_TRANSPORT", 00:09:29.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.867 "adrfam": "ipv4", 00:09:29.867 "trsvcid": "$NVMF_PORT", 00:09:29.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.867 "hdgst": ${hdgst:-false}, 00:09:29.867 "ddgst": ${ddgst:-false} 00:09:29.867 }, 00:09:29.867 "method": "bdev_nvme_attach_controller" 00:09:29.867 } 00:09:29.867 EOF 00:09:29.867 )") 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:29.867 16:02:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.867 "params": { 00:09:29.867 "name": "Nvme1", 00:09:29.867 "trtype": "tcp", 00:09:29.867 "traddr": "10.0.0.2", 00:09:29.867 "adrfam": "ipv4", 00:09:29.867 "trsvcid": "4420", 00:09:29.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.867 "hdgst": false, 00:09:29.867 "ddgst": false 00:09:29.867 }, 00:09:29.867 "method": "bdev_nvme_attach_controller" 00:09:29.867 }' 00:09:29.867 [2024-11-20 16:02:30.209012] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:09:29.868 [2024-11-20 16:02:30.209057] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628896 ] 00:09:29.868 [2024-11-20 16:02:30.285555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.868 [2024-11-20 16:02:30.329765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.868 [2024-11-20 16:02:30.329875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.868 [2024-11-20 16:02:30.329876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.868 I/O targets: 00:09:29.868 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:29.868 00:09:29.868 00:09:29.868 CUnit - A unit testing framework for C - Version 2.1-3 00:09:29.868 http://cunit.sourceforge.net/ 00:09:29.868 00:09:29.868 00:09:29.868 Suite: bdevio tests on: Nvme1n1 00:09:29.868 Test: blockdev write read block ...passed 00:09:29.868 Test: blockdev write zeroes read block ...passed 00:09:29.868 Test: blockdev write zeroes read no split ...passed 00:09:29.868 Test: blockdev write zeroes read split 
...passed 00:09:29.868 Test: blockdev write zeroes read split partial ...passed 00:09:29.868 Test: blockdev reset ...[2024-11-20 16:02:30.643879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:29.868 [2024-11-20 16:02:30.643943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf6340 (9): Bad file descriptor 00:09:29.868 [2024-11-20 16:02:30.655440] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:29.868 passed 00:09:29.868 Test: blockdev write read 8 blocks ...passed 00:09:29.868 Test: blockdev write read size > 128k ...passed 00:09:29.868 Test: blockdev write read invalid size ...passed 00:09:30.124 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.124 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.124 Test: blockdev write read max offset ...passed 00:09:30.124 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.124 Test: blockdev writev readv 8 blocks ...passed 00:09:30.124 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.124 Test: blockdev writev readv block ...passed 00:09:30.124 Test: blockdev writev readv size > 128k ...passed 00:09:30.124 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.124 Test: blockdev comparev and writev ...[2024-11-20 16:02:30.908699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.908727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.908742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 
16:02:30.908750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.908994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.909010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.909021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.909028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.909276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.909286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.909297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.909305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.909538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.909548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:30.124 [2024-11-20 16:02:30.909559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.124 [2024-11-20 16:02:30.909566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:30.124 passed 00:09:30.382 Test: blockdev nvme passthru rw ...passed 00:09:30.382 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:02:30.991288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.382 [2024-11-20 16:02:30.991310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:30.382 [2024-11-20 16:02:30.991421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.382 [2024-11-20 16:02:30.991431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:30.382 [2024-11-20 16:02:30.991540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.382 [2024-11-20 16:02:30.991550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:30.382 [2024-11-20 16:02:30.991651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.382 [2024-11-20 16:02:30.991660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:30.382 passed 00:09:30.382 Test: blockdev nvme admin passthru ...passed 00:09:30.382 Test: blockdev copy ...passed 00:09:30.382 00:09:30.382 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.382 suites 1 1 n/a 0 0 00:09:30.382 tests 23 23 23 0 0 00:09:30.382 asserts 152 152 152 0 n/a 00:09:30.382 00:09:30.382 Elapsed time = 1.123 seconds 
00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.382 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.382 rmmod nvme_tcp 00:09:30.641 rmmod nvme_fabrics 00:09:30.641 rmmod nvme_keyring 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2628869 ']' 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2628869 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2628869 ']' 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2628869 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628869 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628869' 00:09:30.641 killing process with pid 2628869 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2628869 00:09:30.641 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2628869 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.900 16:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.808 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.808 00:09:32.808 real 0m9.974s 00:09:32.808 user 0m9.906s 00:09:32.808 sys 0m4.959s 00:09:32.808 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.808 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.808 ************************************ 00:09:32.808 END TEST nvmf_bdevio 00:09:32.808 ************************************ 00:09:32.808 16:02:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:32.808 00:09:32.808 real 4m37.393s 00:09:32.808 user 10m25.008s 00:09:32.808 sys 1m38.194s 00:09:32.808 16:02:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.808 16:02:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.808 ************************************ 00:09:32.808 END TEST nvmf_target_core 00:09:32.808 ************************************ 00:09:33.068 16:02:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.068 16:02:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.068 16:02:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.068 16:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:33.068 ************************************ 00:09:33.068 START TEST nvmf_target_extra 00:09:33.068 ************************************ 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.068 * Looking for test storage... 00:09:33.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.068 --rc genhtml_branch_coverage=1 00:09:33.068 --rc genhtml_function_coverage=1 00:09:33.068 --rc genhtml_legend=1 00:09:33.068 --rc geninfo_all_blocks=1 
00:09:33.068 --rc geninfo_unexecuted_blocks=1 00:09:33.068 00:09:33.068 ' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.068 --rc genhtml_branch_coverage=1 00:09:33.068 --rc genhtml_function_coverage=1 00:09:33.068 --rc genhtml_legend=1 00:09:33.068 --rc geninfo_all_blocks=1 00:09:33.068 --rc geninfo_unexecuted_blocks=1 00:09:33.068 00:09:33.068 ' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.068 --rc genhtml_branch_coverage=1 00:09:33.068 --rc genhtml_function_coverage=1 00:09:33.068 --rc genhtml_legend=1 00:09:33.068 --rc geninfo_all_blocks=1 00:09:33.068 --rc geninfo_unexecuted_blocks=1 00:09:33.068 00:09:33.068 ' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.068 --rc genhtml_branch_coverage=1 00:09:33.068 --rc genhtml_function_coverage=1 00:09:33.068 --rc genhtml_legend=1 00:09:33.068 --rc geninfo_all_blocks=1 00:09:33.068 --rc geninfo_unexecuted_blocks=1 00:09:33.068 00:09:33.068 ' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.068 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.069 16:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:33.329 ************************************ 00:09:33.329 START TEST nvmf_example 00:09:33.329 ************************************ 00:09:33.329 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.329 * Looking for test storage... 00:09:33.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.330 
16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
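The trace above steps through `scripts/common.sh`'s version comparison (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`): it splits each dotted version on `IFS=.-:` into an array and compares component by component. A minimal standalone sketch of that component-wise comparison, simplified from what the xtrace shows (this is an illustration, not the SPDK source):

```shell
# Component-wise "less than" for dotted version strings, mirroring the
# cmp_versions logic traced above (split on ".", "-", ":" then compare
# numerically, padding the shorter version with zeros).
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        if (( a < b )); then return 0; fi       # strictly less -> true
        if (( a > b )); then return 1; fi
    done
    return 1                                    # equal -> not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Note that the numeric comparison is why the trace validates each component with `[[ 1 =~ ^[0-9]+$ ]]` before using it; this sketch assumes purely numeric components.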
00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.330 --rc genhtml_branch_coverage=1 00:09:33.330 --rc genhtml_function_coverage=1 00:09:33.330 --rc genhtml_legend=1 00:09:33.330 --rc geninfo_all_blocks=1 00:09:33.330 --rc geninfo_unexecuted_blocks=1 00:09:33.330 00:09:33.330 ' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.330 --rc genhtml_branch_coverage=1 00:09:33.330 --rc genhtml_function_coverage=1 00:09:33.330 --rc genhtml_legend=1 00:09:33.330 --rc geninfo_all_blocks=1 00:09:33.330 --rc geninfo_unexecuted_blocks=1 00:09:33.330 00:09:33.330 ' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.330 --rc genhtml_branch_coverage=1 00:09:33.330 --rc genhtml_function_coverage=1 00:09:33.330 --rc genhtml_legend=1 00:09:33.330 --rc geninfo_all_blocks=1 00:09:33.330 --rc geninfo_unexecuted_blocks=1 00:09:33.330 00:09:33.330 ' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.330 --rc 
genhtml_branch_coverage=1 00:09:33.330 --rc genhtml_function_coverage=1 00:09:33.330 --rc genhtml_legend=1 00:09:33.330 --rc geninfo_all_blocks=1 00:09:33.330 --rc geninfo_unexecuted_blocks=1 00:09:33.330 00:09:33.330 ' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.330 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:33.331 16:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.331 
16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.331 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.901 16:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:39.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:39.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:39.901 Found net devices under 0000:86:00.0: cvl_0_0 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.901 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.902 16:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:39.902 Found net devices under 0000:86:00.1: cvl_0_1 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.902 
16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.902 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:09:39.902 00:09:39.902 --- 10.0.0.2 ping statistics --- 00:09:39.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.902 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:09:39.902 00:09:39.902 --- 10.0.0.1 ping statistics --- 00:09:39.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.902 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.902 16:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2632716 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2632716 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2632716 ']' 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:39.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.902 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.469 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.469 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:40.469 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:40.469 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.469 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.469 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:40.470 
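The `waitforlisten 2632716` step traced above blocks until the freshly started nvmf app is listening on /var/tmp/spdk.sock. A simplified sketch of that poll loop (assumption: the real autotest_common.sh helper distinguishes more failure modes; `-e` is used instead of `-S` here so the sketch can be exercised with a plain file):

```shell
# Poll until $rpc_addr appears or the process dies, with a retry budget
# like the max_retries=100 seen in the trace.
waitforlisten_sketch() {
  pid=$1
  rpc_addr=${2:-/var/tmp/spdk.sock}
  retries=${3:-100}
  while [ "$retries" -gt 0 ]; do
    if [ -e "$rpc_addr" ]; then
      return 0                      # socket showed up: target is ready
    fi
    if ! kill -0 "$pid" 2>/dev/null; then
      return 1                      # process exited before listening
    fi
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1                          # timed out
}
```

The trap registered just before it (`process_shm --id $NVMF_APP_SHM_ID; nvmftestfini`) guarantees the target is torn down even if this wait fails.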
16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:40.470 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:50.509 Initializing NVMe Controllers 00:09:50.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:50.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:50.509 Initialization complete. Launching workers. 00:09:50.509 ======================================================== 00:09:50.509 Latency(us) 00:09:50.509 Device Information : IOPS MiB/s Average min max 00:09:50.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18286.80 71.43 3499.35 697.27 15388.21 00:09:50.509 ======================================================== 00:09:50.509 Total : 18286.80 71.43 3499.35 697.27 15388.21 00:09:50.509 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:50.509 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.769 rmmod nvme_tcp 00:09:50.769 rmmod nvme_fabrics 00:09:50.769 rmmod nvme_keyring 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
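Between `waitforlisten` and the perf run, the trace issues a handful of RPCs to stand up the target: create the TCP transport, back it with a malloc bdev, and expose that bdev through a subsystem listening on 10.0.0.2:4420. Sketched as plain `rpc.py` calls (echoed here so the sequence can be reviewed without a running target; flags mirror the trace, and `rpc.py` is SPDK's stock RPC client):

```shell
# Each call is printed instead of sent; drop the wrapper to run for real
# against an nvmf target listening on /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192 B IO unit size
rpc bdev_malloc_create 64 512                   # 64 MiB RAM-backed bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

spdk_nvme_perf then connects with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'`, which is exactly the listener created last.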
00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2632716 ']' 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2632716 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2632716 ']' 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2632716 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632716 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:50.769 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632716' 00:09:50.770 killing process with pid 2632716 00:09:50.770 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2632716 00:09:50.770 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2632716 00:09:51.029 nvmf threads initialize successfully 00:09:51.029 bdev subsystem init successfully 00:09:51.029 created a nvmf target service 00:09:51.029 create targets's poll groups done 00:09:51.029 all subsystems of target started 00:09:51.029 nvmf target is running 00:09:51.029 all subsystems of target stopped 00:09:51.029 destroy targets's poll groups done 00:09:51.029 destroyed the nvmf target service 00:09:51.029 bdev subsystem 
finish successfully 00:09:51.029 nvmf threads destroy successfully 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.029 16:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.935 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.935 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:52.935 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.935 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.935 00:09:52.935 real 0m19.833s 00:09:52.935 user 0m45.880s 00:09:52.935 sys 0m6.185s 00:09:52.935 
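The `iptr` step traced in the teardown (`iptables-save` piped through `grep -v SPDK_NVMF` into `iptables-restore`) removes every firewall rule the test added by filtering on the comment tag attached at setup time. The filter stage in isolation (assumption: the real helper pipes straight between iptables-save and iptables-restore, followed by `remove_spdk_ns` deleting cvl_0_0_ns_spdk):

```shell
# stdin: iptables-save output; stdout: the same ruleset minus any rule
# carrying the SPDK_NVMF comment tag.
iptr_filter_sketch() {
  grep -v SPDK_NVMF
}

# Example ruleset: only the untagged rule survives the round-trip.
printf '%s\n' \
  '-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF' \
  '-A INPUT -j DROP' | iptr_filter_sketch
```

Tagging rules at insertion time is what makes this cleanup safe: rules added by anything other than the test are passed through untouched.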
16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.935 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.935 ************************************ 00:09:52.935 END TEST nvmf_example 00:09:52.935 ************************************ 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:53.195 ************************************ 00:09:53.195 START TEST nvmf_filesystem 00:09:53.195 ************************************ 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:53.195 * Looking for test storage... 
00:09:53.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:53.195 
16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.195 16:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.195 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:53.195 --rc genhtml_branch_coverage=1 00:09:53.195 --rc genhtml_function_coverage=1 00:09:53.195 --rc genhtml_legend=1 00:09:53.195 --rc geninfo_all_blocks=1 00:09:53.195 --rc geninfo_unexecuted_blocks=1 00:09:53.195 00:09:53.195 ' 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.195 --rc genhtml_branch_coverage=1 00:09:53.195 --rc genhtml_function_coverage=1 00:09:53.195 --rc genhtml_legend=1 00:09:53.195 --rc geninfo_all_blocks=1 00:09:53.195 --rc geninfo_unexecuted_blocks=1 00:09:53.195 00:09:53.195 ' 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.195 --rc genhtml_branch_coverage=1 00:09:53.195 --rc genhtml_function_coverage=1 00:09:53.195 --rc genhtml_legend=1 00:09:53.195 --rc geninfo_all_blocks=1 00:09:53.195 --rc geninfo_unexecuted_blocks=1 00:09:53.195 00:09:53.195 ' 00:09:53.195 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.196 --rc genhtml_branch_coverage=1 00:09:53.196 --rc genhtml_function_coverage=1 00:09:53.196 --rc genhtml_legend=1 00:09:53.196 --rc geninfo_all_blocks=1 00:09:53.196 --rc geninfo_unexecuted_blocks=1 00:09:53.196 00:09:53.196 ' 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:53.196 16:02:54 
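The scripts/common.sh trace above is the guts of an `lt 1.15 2` check on the lcov version: both version strings are split on dots (`read -ra ver1`, `read -ra ver2`) and compared field by field via `decimal`. A simplified reimplementation of that comparison, not the exact cmp_versions helper:

```shell
# True (exit 0) when $1 is strictly older than $2, comparing up to three
# dot-separated numeric fields; missing fields count as 0.
version_lt() {
  IFS=. read -r a1 a2 a3 <<EOF
$1
EOF
  IFS=. read -r b1 b2 b3 <<EOF
$2
EOF
  for pair in "${a1:-0}.${b1:-0}" "${a2:-0}.${b2:-0}" "${a3:-0}.${b3:-0}"; do
    x=${pair%%.*}; y=${pair#*.}
    if [ "$x" -lt "$y" ]; then return 0; fi
    if [ "$x" -gt "$y" ]; then return 1; fi
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Numeric field-wise comparison is what makes `1.2.3 < 1.10.0` come out right, where a plain string comparison would not.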
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:53.196 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:53.196 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:53.196 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:53.196 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:53.459 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:53.459 
16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:53.459 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:53.459 #define SPDK_CONFIG_H 00:09:53.459 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:53.459 #define SPDK_CONFIG_APPS 1 00:09:53.459 #define SPDK_CONFIG_ARCH native 00:09:53.459 #undef SPDK_CONFIG_ASAN 00:09:53.459 #undef SPDK_CONFIG_AVAHI 00:09:53.459 #undef SPDK_CONFIG_CET 00:09:53.459 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:53.459 #define SPDK_CONFIG_COVERAGE 1 00:09:53.459 #define SPDK_CONFIG_CROSS_PREFIX 00:09:53.459 #undef SPDK_CONFIG_CRYPTO 00:09:53.459 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:53.459 #undef SPDK_CONFIG_CUSTOMOCF 00:09:53.459 #undef SPDK_CONFIG_DAOS 00:09:53.459 #define SPDK_CONFIG_DAOS_DIR 00:09:53.459 #define SPDK_CONFIG_DEBUG 1 00:09:53.459 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:53.459 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:53.459 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:53.459 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:53.459 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:53.459 #undef SPDK_CONFIG_DPDK_UADK 00:09:53.459 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:53.459 #define SPDK_CONFIG_EXAMPLES 1 00:09:53.459 #undef SPDK_CONFIG_FC 00:09:53.459 #define SPDK_CONFIG_FC_PATH 00:09:53.459 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:53.459 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:53.459 #define SPDK_CONFIG_FSDEV 1 00:09:53.459 #undef SPDK_CONFIG_FUSE 00:09:53.459 #undef SPDK_CONFIG_FUZZER 00:09:53.459 #define SPDK_CONFIG_FUZZER_LIB 00:09:53.459 #undef SPDK_CONFIG_GOLANG 00:09:53.459 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:53.459 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:53.459 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:53.459 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:53.459 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:53.460 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:53.460 #undef SPDK_CONFIG_HAVE_LZ4 00:09:53.460 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:53.460 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:53.460 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:53.460 #define SPDK_CONFIG_IDXD 1 00:09:53.460 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:53.460 #undef SPDK_CONFIG_IPSEC_MB 00:09:53.460 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:53.460 #define SPDK_CONFIG_ISAL 1 00:09:53.460 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:53.460 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:53.460 #define SPDK_CONFIG_LIBDIR 00:09:53.460 #undef SPDK_CONFIG_LTO 00:09:53.460 #define SPDK_CONFIG_MAX_LCORES 128 00:09:53.460 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:53.460 #define SPDK_CONFIG_NVME_CUSE 1 00:09:53.460 #undef SPDK_CONFIG_OCF 00:09:53.460 #define SPDK_CONFIG_OCF_PATH 00:09:53.460 #define SPDK_CONFIG_OPENSSL_PATH 00:09:53.460 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:53.460 #define SPDK_CONFIG_PGO_DIR 00:09:53.460 #undef SPDK_CONFIG_PGO_USE 00:09:53.460 #define SPDK_CONFIG_PREFIX /usr/local 00:09:53.460 #undef SPDK_CONFIG_RAID5F 00:09:53.460 #undef SPDK_CONFIG_RBD 00:09:53.460 #define SPDK_CONFIG_RDMA 1 00:09:53.460 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:53.460 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:53.460 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:53.460 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:53.460 #define SPDK_CONFIG_SHARED 1 00:09:53.460 #undef SPDK_CONFIG_SMA 00:09:53.460 #define SPDK_CONFIG_TESTS 1 00:09:53.460 #undef SPDK_CONFIG_TSAN 00:09:53.460 #define SPDK_CONFIG_UBLK 1 00:09:53.460 #define SPDK_CONFIG_UBSAN 1 00:09:53.460 #undef SPDK_CONFIG_UNIT_TESTS 00:09:53.460 #undef SPDK_CONFIG_URING 00:09:53.460 #define SPDK_CONFIG_URING_PATH 00:09:53.460 #undef SPDK_CONFIG_URING_ZNS 00:09:53.460 #undef SPDK_CONFIG_USDT 00:09:53.460 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:53.460 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:53.460 #define SPDK_CONFIG_VFIO_USER 1 00:09:53.460 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:53.460 #define SPDK_CONFIG_VHOST 1 00:09:53.460 #define SPDK_CONFIG_VIRTIO 1 00:09:53.460 #undef SPDK_CONFIG_VTUNE 00:09:53.460 #define SPDK_CONFIG_VTUNE_DIR 00:09:53.460 #define SPDK_CONFIG_WERROR 1 00:09:53.460 #define SPDK_CONFIG_WPDK_DIR 00:09:53.460 #undef SPDK_CONFIG_XNVME 00:09:53.460 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:53.460 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:53.460 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:53.461 
16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:53.461 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:53.461 
16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:53.461 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.461 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2635118 ]] 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2635118 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.qwnajM 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qwnajM/tests/target /tmp/spdk.qwnajM 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:53.462 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189200596992 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6763364352 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981300736 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:53.463 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=679936 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:53.463 * Looking for test storage... 
00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189200596992 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8977956864 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.463 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:53.463 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.463 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.464 --rc genhtml_branch_coverage=1 00:09:53.464 --rc genhtml_function_coverage=1 00:09:53.464 --rc genhtml_legend=1 00:09:53.464 --rc geninfo_all_blocks=1 00:09:53.464 --rc geninfo_unexecuted_blocks=1 00:09:53.464 00:09:53.464 ' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.464 --rc genhtml_branch_coverage=1 00:09:53.464 --rc genhtml_function_coverage=1 00:09:53.464 --rc genhtml_legend=1 00:09:53.464 --rc geninfo_all_blocks=1 00:09:53.464 --rc geninfo_unexecuted_blocks=1 00:09:53.464 00:09:53.464 ' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.464 --rc genhtml_branch_coverage=1 00:09:53.464 --rc genhtml_function_coverage=1 00:09:53.464 --rc genhtml_legend=1 00:09:53.464 --rc geninfo_all_blocks=1 00:09:53.464 --rc geninfo_unexecuted_blocks=1 00:09:53.464 00:09:53.464 ' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.464 --rc genhtml_branch_coverage=1 00:09:53.464 --rc genhtml_function_coverage=1 00:09:53.464 --rc genhtml_legend=1 00:09:53.464 --rc geninfo_all_blocks=1 00:09:53.464 --rc geninfo_unexecuted_blocks=1 00:09:53.464 00:09:53.464 ' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.464 16:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.464 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.465 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.033 16:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:00.033 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.033 16:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.033 Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.033 Found net devices under 0000:86:00.1: cvl_0_1 00:10:00.033 16:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.033 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:10:00.034 00:10:00.034 --- 10.0.0.2 ping statistics --- 00:10:00.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.034 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:00.034 00:10:00.034 --- 10.0.0.1 ping statistics --- 00:10:00.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.034 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:00.034 16:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.034 ************************************ 00:10:00.034 START TEST nvmf_filesystem_no_in_capsule 00:10:00.034 ************************************ 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2638399 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2638399 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2638399 ']' 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.034 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.034 [2024-11-20 16:03:00.385334] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:10:00.034 [2024-11-20 16:03:00.385381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.034 [2024-11-20 16:03:00.467903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.034 [2024-11-20 16:03:00.513200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.034 [2024-11-20 16:03:00.513237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:00.034 [2024-11-20 16:03:00.513244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.034 [2024-11-20 16:03:00.513251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.034 [2024-11-20 16:03:00.513256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.034 [2024-11-20 16:03:00.514965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.034 [2024-11-20 16:03:00.515077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.034 [2024-11-20 16:03:00.515111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.034 [2024-11-20 16:03:00.515112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.599 [2024-11-20 16:03:01.276067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.599 Malloc1 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.599 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.599 [2024-11-20 16:03:01.426804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.600 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.600 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:00.600 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:00.600 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:00.857 16:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:00.857 { 00:10:00.857 "name": "Malloc1", 00:10:00.857 "aliases": [ 00:10:00.857 "094e8cbc-abe0-4734-85df-3452883bad48" 00:10:00.857 ], 00:10:00.857 "product_name": "Malloc disk", 00:10:00.857 "block_size": 512, 00:10:00.857 "num_blocks": 1048576, 00:10:00.857 "uuid": "094e8cbc-abe0-4734-85df-3452883bad48", 00:10:00.857 "assigned_rate_limits": { 00:10:00.857 "rw_ios_per_sec": 0, 00:10:00.857 "rw_mbytes_per_sec": 0, 00:10:00.857 "r_mbytes_per_sec": 0, 00:10:00.857 "w_mbytes_per_sec": 0 00:10:00.857 }, 00:10:00.857 "claimed": true, 00:10:00.857 "claim_type": "exclusive_write", 00:10:00.857 "zoned": false, 00:10:00.857 "supported_io_types": { 00:10:00.857 "read": true, 00:10:00.857 "write": true, 00:10:00.857 "unmap": true, 00:10:00.857 "flush": true, 00:10:00.857 "reset": true, 00:10:00.857 "nvme_admin": false, 00:10:00.857 "nvme_io": false, 00:10:00.857 "nvme_io_md": false, 00:10:00.857 "write_zeroes": true, 00:10:00.857 "zcopy": true, 00:10:00.857 "get_zone_info": false, 00:10:00.857 "zone_management": false, 00:10:00.857 "zone_append": false, 00:10:00.857 "compare": false, 00:10:00.857 "compare_and_write": 
false, 00:10:00.857 "abort": true, 00:10:00.857 "seek_hole": false, 00:10:00.857 "seek_data": false, 00:10:00.857 "copy": true, 00:10:00.857 "nvme_iov_md": false 00:10:00.857 }, 00:10:00.857 "memory_domains": [ 00:10:00.857 { 00:10:00.857 "dma_device_id": "system", 00:10:00.857 "dma_device_type": 1 00:10:00.857 }, 00:10:00.857 { 00:10:00.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.857 "dma_device_type": 2 00:10:00.857 } 00:10:00.857 ], 00:10:00.857 "driver_specific": {} 00:10:00.857 } 00:10:00.857 ]' 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:00.857 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.230 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:02.230 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:02.230 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.230 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:02.230 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:04.128 16:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:04.128 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:05.060 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:05.995 16:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.995 ************************************ 00:10:05.995 START TEST filesystem_ext4 00:10:05.995 ************************************ 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:05.995 16:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:05.995 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:05.995 mke2fs 1.47.0 (5-Feb-2023) 00:10:05.995 Discarding device blocks: 0/522240 done 00:10:05.995 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:05.995 Filesystem UUID: 11a2d6d7-1f7b-494a-9a04-7c896c06ba9f 00:10:05.995 Superblock backups stored on blocks: 00:10:05.995 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:05.995 00:10:05.995 Allocating group tables: 0/64 done 00:10:05.995 Writing inode tables: 0/64 done 00:10:09.271 Creating journal (8192 blocks): done 00:10:11.019 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:11.019 00:10:11.019 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:11.019 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.568 16:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2638399 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.568 00:10:17.568 real 0m11.263s 00:10:17.568 user 0m0.022s 00:10:17.568 sys 0m0.085s 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:17.568 ************************************ 00:10:17.568 END TEST filesystem_ext4 00:10:17.568 ************************************ 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:17.568 
16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.568 ************************************ 00:10:17.568 START TEST filesystem_btrfs 00:10:17.568 ************************************ 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:17.568 16:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:17.568 16:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:17.568 btrfs-progs v6.8.1 00:10:17.568 See https://btrfs.readthedocs.io for more information. 00:10:17.568 00:10:17.568 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:17.568 NOTE: several default settings have changed in version 5.15, please make sure 00:10:17.568 this does not affect your deployments: 00:10:17.568 - DUP for metadata (-m dup) 00:10:17.568 - enabled no-holes (-O no-holes) 00:10:17.568 - enabled free-space-tree (-R free-space-tree) 00:10:17.568 00:10:17.568 Label: (null) 00:10:17.568 UUID: 4d3bdf07-7f19-4940-8c95-33d20d1bab18 00:10:17.568 Node size: 16384 00:10:17.569 Sector size: 4096 (CPU page size: 4096) 00:10:17.569 Filesystem size: 510.00MiB 00:10:17.569 Block group profiles: 00:10:17.569 Data: single 8.00MiB 00:10:17.569 Metadata: DUP 32.00MiB 00:10:17.569 System: DUP 8.00MiB 00:10:17.569 SSD detected: yes 00:10:17.569 Zoned device: no 00:10:17.569 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:17.569 Checksum: crc32c 00:10:17.569 Number of devices: 1 00:10:17.569 Devices: 00:10:17.569 ID SIZE PATH 00:10:17.569 1 510.00MiB /dev/nvme0n1p1 00:10:17.569 00:10:17.569 16:03:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:17.569 16:03:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.499 16:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2638399 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.499 00:10:18.499 real 0m1.286s 00:10:18.499 user 0m0.022s 00:10:18.499 sys 0m0.121s 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.499 
16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:18.499 ************************************ 00:10:18.499 END TEST filesystem_btrfs 00:10:18.499 ************************************ 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.499 ************************************ 00:10:18.499 START TEST filesystem_xfs 00:10:18.499 ************************************ 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:18.499 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:18.757 16:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:18.757 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:18.757 = sectsz=512 attr=2, projid32bit=1 00:10:18.757 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:18.757 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:18.757 data = bsize=4096 blocks=130560, imaxpct=25 00:10:18.757 = sunit=0 swidth=0 blks 00:10:18.757 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:18.757 log =internal log bsize=4096 blocks=16384, version=2 00:10:18.757 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:18.757 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:19.690 Discarding blocks...Done. 
00:10:19.690 16:03:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:19.690 16:03:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.063 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.063 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:21.063 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.063 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:21.063 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:21.063 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2638399 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:21.321 16:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.321 00:10:21.321 real 0m2.604s 00:10:21.321 user 0m0.016s 00:10:21.321 sys 0m0.083s 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:21.321 ************************************ 00:10:21.321 END TEST filesystem_xfs 00:10:21.321 ************************************ 00:10:21.321 16:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2638399 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2638399 ']' 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2638399 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.579 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2638399 00:10:21.837 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.837 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.837 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2638399' 00:10:21.837 killing process with pid 2638399 00:10:21.837 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2638399 00:10:21.837 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2638399 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:22.096 00:10:22.096 real 0m22.437s 00:10:22.096 user 1m28.577s 00:10:22.096 sys 0m1.545s 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.096 ************************************ 00:10:22.096 END TEST nvmf_filesystem_no_in_capsule 00:10:22.096 ************************************ 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.096 16:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.096 ************************************ 00:10:22.096 START TEST nvmf_filesystem_in_capsule 00:10:22.096 ************************************ 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2642812 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2642812 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2642812 ']' 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.096 16:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.096 16:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.096 [2024-11-20 16:03:22.890758] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:10:22.097 [2024-11-20 16:03:22.890802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.355 [2024-11-20 16:03:22.970306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.355 [2024-11-20 16:03:23.013473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.355 [2024-11-20 16:03:23.013509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.355 [2024-11-20 16:03:23.013516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.355 [2024-11-20 16:03:23.013522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.355 [2024-11-20 16:03:23.013527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:22.355 [2024-11-20 16:03:23.015043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.355 [2024-11-20 16:03:23.015138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.355 [2024-11-20 16:03:23.015249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.355 [2024-11-20 16:03:23.015250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.355 [2024-11-20 16:03:23.152877] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.355 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 Malloc1 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 16:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 [2024-11-20 16:03:23.318243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.613 16:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.613 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:22.613 { 00:10:22.613 "name": "Malloc1", 00:10:22.613 "aliases": [ 00:10:22.613 "767a17ba-78ed-49ae-921a-c84c914f5626" 00:10:22.613 ], 00:10:22.613 "product_name": "Malloc disk", 00:10:22.613 "block_size": 512, 00:10:22.613 "num_blocks": 1048576, 00:10:22.613 "uuid": "767a17ba-78ed-49ae-921a-c84c914f5626", 00:10:22.613 "assigned_rate_limits": { 00:10:22.613 "rw_ios_per_sec": 0, 00:10:22.613 "rw_mbytes_per_sec": 0, 00:10:22.613 "r_mbytes_per_sec": 0, 00:10:22.613 "w_mbytes_per_sec": 0 00:10:22.613 }, 00:10:22.613 "claimed": true, 00:10:22.613 "claim_type": "exclusive_write", 00:10:22.613 "zoned": false, 00:10:22.614 "supported_io_types": { 00:10:22.614 "read": true, 00:10:22.614 "write": true, 00:10:22.614 "unmap": true, 00:10:22.614 "flush": true, 00:10:22.614 "reset": true, 00:10:22.614 "nvme_admin": false, 00:10:22.614 "nvme_io": false, 00:10:22.614 "nvme_io_md": false, 00:10:22.614 "write_zeroes": true, 00:10:22.614 "zcopy": true, 00:10:22.614 "get_zone_info": false, 00:10:22.614 "zone_management": false, 00:10:22.614 "zone_append": false, 00:10:22.614 "compare": false, 00:10:22.614 "compare_and_write": false, 00:10:22.614 "abort": true, 00:10:22.614 "seek_hole": false, 00:10:22.614 "seek_data": false, 00:10:22.614 "copy": true, 00:10:22.614 "nvme_iov_md": false 00:10:22.614 }, 00:10:22.614 "memory_domains": [ 00:10:22.614 { 00:10:22.614 "dma_device_id": "system", 00:10:22.614 "dma_device_type": 1 00:10:22.614 }, 00:10:22.614 { 00:10:22.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.614 "dma_device_type": 2 00:10:22.614 } 00:10:22.614 ], 00:10:22.614 
"driver_specific": {} 00:10:22.614 } 00:10:22.614 ]' 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:22.614 16:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.987 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.987 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:23.987 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.987 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:23.987 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:25.887 16:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:25.887 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:26.145 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:26.403 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.777 ************************************ 00:10:27.777 START TEST filesystem_in_capsule_ext4 00:10:27.777 ************************************ 00:10:27.777 16:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:27.777 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:27.777 mke2fs 1.47.0 (5-Feb-2023) 00:10:27.777 Discarding device blocks: 
0/522240 done 00:10:27.777 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:27.777 Filesystem UUID: 691eaa7c-adc9-4be5-af85-9dddb4314a61 00:10:27.777 Superblock backups stored on blocks: 00:10:27.777 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:27.777 00:10:27.777 Allocating group tables: 0/64 done 00:10:27.777 Writing inode tables: 0/64 done 00:10:27.777 Creating journal (8192 blocks): done 00:10:29.974 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:29.974 00:10:29.974 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:29.974 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2642812 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.538 00:10:36.538 real 0m8.339s 00:10:36.538 user 0m0.046s 00:10:36.538 sys 0m0.056s 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:36.538 ************************************ 00:10:36.538 END TEST filesystem_in_capsule_ext4 00:10:36.538 ************************************ 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.538 ************************************ 00:10:36.538 START 
TEST filesystem_in_capsule_btrfs 00:10:36.538 ************************************ 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:36.538 btrfs-progs v6.8.1 00:10:36.538 See https://btrfs.readthedocs.io for more information. 00:10:36.538 00:10:36.538 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:36.538 NOTE: several default settings have changed in version 5.15, please make sure 00:10:36.538 this does not affect your deployments: 00:10:36.538 - DUP for metadata (-m dup) 00:10:36.538 - enabled no-holes (-O no-holes) 00:10:36.538 - enabled free-space-tree (-R free-space-tree) 00:10:36.538 00:10:36.538 Label: (null) 00:10:36.538 UUID: 996d4dfe-d3d0-46f0-918f-a1504f15b31b 00:10:36.538 Node size: 16384 00:10:36.538 Sector size: 4096 (CPU page size: 4096) 00:10:36.538 Filesystem size: 510.00MiB 00:10:36.538 Block group profiles: 00:10:36.538 Data: single 8.00MiB 00:10:36.538 Metadata: DUP 32.00MiB 00:10:36.538 System: DUP 8.00MiB 00:10:36.538 SSD detected: yes 00:10:36.538 Zoned device: no 00:10:36.538 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:36.538 Checksum: crc32c 00:10:36.538 Number of devices: 1 00:10:36.538 Devices: 00:10:36.538 ID SIZE PATH 00:10:36.538 1 510.00MiB /dev/nvme0n1p1 00:10:36.538 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:36.538 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2642812 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.110 00:10:37.110 real 0m1.216s 00:10:37.110 user 0m0.029s 00:10:37.110 sys 0m0.116s 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.110 ************************************ 00:10:37.110 END TEST filesystem_in_capsule_btrfs 00:10:37.110 ************************************ 00:10:37.110 16:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.110 ************************************ 00:10:37.110 START TEST filesystem_in_capsule_xfs 00:10:37.110 ************************************ 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:37.110 
16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:37.110 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:37.369 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:37.369 = sectsz=512 attr=2, projid32bit=1 00:10:37.369 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:37.369 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:37.369 data = bsize=4096 blocks=130560, imaxpct=25 00:10:37.369 = sunit=0 swidth=0 blks 00:10:37.369 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:37.369 log =internal log bsize=4096 blocks=16384, version=2 00:10:37.369 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:37.369 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:38.305 Discarding blocks...Done. 
00:10:38.305 16:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:38.305 16:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.207 16:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.207 16:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:40.207 16:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.207 16:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:40.207 16:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:40.207 16:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2642812 00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.207 00:10:40.207 real 0m3.105s 00:10:40.207 user 0m0.027s 00:10:40.207 sys 0m0.072s 00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.207 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:40.207 ************************************ 00:10:40.207 END TEST filesystem_in_capsule_xfs 00:10:40.207 ************************************ 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.466 16:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2642812 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2642812 ']' 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2642812 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:40.466 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.466 16:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642812 00:10:40.725 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.725 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.725 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642812' 00:10:40.725 killing process with pid 2642812 00:10:40.725 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2642812 00:10:40.725 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2642812 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:40.985 00:10:40.985 real 0m18.809s 00:10:40.985 user 1m14.055s 00:10:40.985 sys 0m1.421s 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.985 ************************************ 00:10:40.985 END TEST nvmf_filesystem_in_capsule 00:10:40.985 ************************************ 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.985 rmmod nvme_tcp 00:10:40.985 rmmod nvme_fabrics 00:10:40.985 rmmod nvme_keyring 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:40.985 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.986 16:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.524 00:10:43.524 real 0m49.991s 00:10:43.524 user 2m44.717s 00:10:43.524 sys 0m7.649s 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.524 ************************************ 00:10:43.524 END TEST nvmf_filesystem 00:10:43.524 ************************************ 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.524 ************************************ 00:10:43.524 START TEST nvmf_target_discovery 00:10:43.524 ************************************ 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:43.524 * Looking for test storage... 
00:10:43.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.524 16:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:43.524 
16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:43.524 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.525 --rc genhtml_branch_coverage=1 00:10:43.525 --rc genhtml_function_coverage=1 00:10:43.525 --rc genhtml_legend=1 00:10:43.525 --rc geninfo_all_blocks=1 00:10:43.525 --rc geninfo_unexecuted_blocks=1 00:10:43.525 00:10:43.525 ' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.525 --rc genhtml_branch_coverage=1 00:10:43.525 --rc genhtml_function_coverage=1 00:10:43.525 --rc genhtml_legend=1 00:10:43.525 --rc geninfo_all_blocks=1 00:10:43.525 --rc geninfo_unexecuted_blocks=1 00:10:43.525 00:10:43.525 ' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.525 --rc genhtml_branch_coverage=1 00:10:43.525 --rc genhtml_function_coverage=1 00:10:43.525 --rc genhtml_legend=1 00:10:43.525 --rc geninfo_all_blocks=1 00:10:43.525 --rc geninfo_unexecuted_blocks=1 00:10:43.525 00:10:43.525 ' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.525 --rc genhtml_branch_coverage=1 00:10:43.525 --rc genhtml_function_coverage=1 00:10:43.525 --rc genhtml_legend=1 00:10:43.525 --rc geninfo_all_blocks=1 00:10:43.525 --rc geninfo_unexecuted_blocks=1 00:10:43.525 00:10:43.525 ' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.525 16:03:44 
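The `lt 1.15 2` / `cmp_versions` trace above decides whether the installed lcov predates 2.x by comparing dot-separated components numerically. A minimal illustration of that idea (not SPDK's exact scripts/common.sh implementation; numeric components only):

```shell
# version_lt A B: succeed iff version A sorts strictly before version B
version_lt() {
  local IFS=. i x y
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    x=${a[i]:-0}           # missing fields compare as 0
    y=${b[i]:-0}
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1                 # equal versions are not "less than"
}
```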
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.525 16:03:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.097 16:03:49 
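The `[: : integer expression expected` warning logged above (nvmf/common.sh line 33) is test(1) being handed an empty string where it expects a number: `'[' '' -eq 1 ']'` is a usage error (exit status 2) rather than a clean false, because the flag being checked is unset in this run. A common defensive idiom is to default the expansion; `FLAG` below is a stand-in name, not the actual variable in common.sh:

```shell
# [ "$FLAG" -eq 1 ]          # with FLAG empty: "integer expression expected"
is_enabled() { [ "${FLAG:-0}" -eq 1 ]; }   # empty/unset counts as 0
```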
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.097 16:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.097 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.097 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.097 16:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.097 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.097 16:03:49 
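The "Found net devices under 0000:86:00.0: cvl_0_0" lines above come from resolving each candidate PCI function to its bound kernel interface via the standard sysfs layout (`/sys/bus/pci/devices/<addr>/net/<ifname>`). A sketch of that walk; `SYSFS_ROOT` is a hook of mine so it can be pointed at a fake tree, and the PCI addresses are the log's own:

```shell
find_net_devs() {
  local root=${SYSFS_ROOT:-/sys} pci d
  for pci in "$@"; do
    for d in "$root/bus/pci/devices/$pci/net/"*; do
      # an unmatched glob stays literal; skip it
      if [ -e "$d" ]; then
        printf '%s\n' "${d##*/}"
      fi
    done
  done
}
```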
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.097 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.097 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.098 16:03:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:10:50.098 00:10:50.098 --- 10.0.0.2 ping statistics --- 00:10:50.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.098 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:10:50.098 00:10:50.098 --- 10.0.0.1 ping statistics --- 00:10:50.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.098 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2649557 00:10:50.098 16:03:50 
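The namespace plumbing the log just completed, condensed: one physical port (cvl_0_0) is moved into a private netns so the target listens on 10.0.0.2 while the initiator reaches it from 10.0.0.1 on the host side, an iptables rule admits TCP/4420 (NVMe/TCP), and the two pings verify both directions. `RUN` is a dry-run hook of mine; set `RUN=` (empty) and run as root to actually apply the commands.

```shell
RUN=${RUN:-echo}

setup_test_netns() {
  local ns=$1 tgt_if=$2 ini_if=$3
  $RUN ip netns add "$ns"
  $RUN ip link set "$tgt_if" netns "$ns"
  $RUN ip addr add 10.0.0.1/24 dev "$ini_if"
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  $RUN ip link set "$ini_if" up
  $RUN ip netns exec "$ns" ip link set "$tgt_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  # admit NVMe/TCP admin-queue traffic arriving on the initiator-side port
  $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```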
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2649557 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2649557 ']' 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.098 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.098 [2024-11-20 16:03:50.151679] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:10:50.098 [2024-11-20 16:03:50.151725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.098 [2024-11-20 16:03:50.229985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.098 [2024-11-20 16:03:50.271308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:50.098 [2024-11-20 16:03:50.271346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.098 [2024-11-20 16:03:50.271353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.098 [2024-11-20 16:03:50.271360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.098 [2024-11-20 16:03:50.271365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.098 [2024-11-20 16:03:50.272983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.098 [2024-11-20 16:03:50.273082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.098 [2024-11-20 16:03:50.273188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.098 [2024-11-20 16:03:50.273189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.357 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.357 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:50.357 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.357 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.357 16:03:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 [2024-11-20 16:03:51.026744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 Null1 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 
16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 [2024-11-20 16:03:51.090071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 Null2 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 
16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 Null3 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 Null4 00:10:50.357 
16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.357 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.616 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:50.616 00:10:50.616 Discovery Log Number of Records 6, Generation counter 6 00:10:50.616 =====Discovery Log Entry 0====== 00:10:50.616 trtype: tcp 00:10:50.616 adrfam: ipv4 00:10:50.616 subtype: current discovery subsystem 00:10:50.616 treq: not required 00:10:50.616 portid: 0 00:10:50.616 trsvcid: 4420 00:10:50.616 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:50.616 traddr: 10.0.0.2 00:10:50.616 eflags: explicit discovery connections, duplicate discovery information 00:10:50.616 sectype: none 00:10:50.616 =====Discovery Log Entry 1====== 00:10:50.616 trtype: tcp 00:10:50.616 adrfam: ipv4 00:10:50.616 subtype: nvme subsystem 00:10:50.616 treq: not required 00:10:50.616 portid: 0 00:10:50.616 trsvcid: 4420 00:10:50.616 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:50.616 traddr: 10.0.0.2 00:10:50.616 eflags: none 00:10:50.616 sectype: none 00:10:50.616 =====Discovery Log Entry 2====== 00:10:50.616 
trtype: tcp 00:10:50.616 adrfam: ipv4 00:10:50.616 subtype: nvme subsystem 00:10:50.616 treq: not required 00:10:50.616 portid: 0 00:10:50.616 trsvcid: 4420 00:10:50.616 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:50.616 traddr: 10.0.0.2 00:10:50.616 eflags: none 00:10:50.616 sectype: none 00:10:50.616 =====Discovery Log Entry 3====== 00:10:50.616 trtype: tcp 00:10:50.616 adrfam: ipv4 00:10:50.616 subtype: nvme subsystem 00:10:50.616 treq: not required 00:10:50.616 portid: 0 00:10:50.616 trsvcid: 4420 00:10:50.616 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:50.616 traddr: 10.0.0.2 00:10:50.616 eflags: none 00:10:50.616 sectype: none 00:10:50.616 =====Discovery Log Entry 4====== 00:10:50.616 trtype: tcp 00:10:50.616 adrfam: ipv4 00:10:50.616 subtype: nvme subsystem 00:10:50.616 treq: not required 00:10:50.616 portid: 0 00:10:50.616 trsvcid: 4420 00:10:50.616 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:50.616 traddr: 10.0.0.2 00:10:50.616 eflags: none 00:10:50.616 sectype: none 00:10:50.616 =====Discovery Log Entry 5====== 00:10:50.616 trtype: tcp 00:10:50.616 adrfam: ipv4 00:10:50.616 subtype: discovery subsystem referral 00:10:50.616 treq: not required 00:10:50.616 portid: 0 00:10:50.616 trsvcid: 4430 00:10:50.616 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:50.616 traddr: 10.0.0.2 00:10:50.616 eflags: none 00:10:50.616 sectype: none 00:10:50.617 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:50.617 Perform nvmf subsystem discovery via RPC 00:10:50.617 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:50.617 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.617 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.617 [ 00:10:50.617 { 00:10:50.617 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:50.617 "subtype": "Discovery", 00:10:50.617 "listen_addresses": [ 00:10:50.617 { 00:10:50.617 "trtype": "TCP", 00:10:50.617 "adrfam": "IPv4", 00:10:50.617 "traddr": "10.0.0.2", 00:10:50.617 "trsvcid": "4420" 00:10:50.617 } 00:10:50.617 ], 00:10:50.617 "allow_any_host": true, 00:10:50.617 "hosts": [] 00:10:50.617 }, 00:10:50.617 { 00:10:50.617 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.617 "subtype": "NVMe", 00:10:50.617 "listen_addresses": [ 00:10:50.617 { 00:10:50.617 "trtype": "TCP", 00:10:50.617 "adrfam": "IPv4", 00:10:50.617 "traddr": "10.0.0.2", 00:10:50.617 "trsvcid": "4420" 00:10:50.617 } 00:10:50.617 ], 00:10:50.617 "allow_any_host": true, 00:10:50.617 "hosts": [], 00:10:50.617 "serial_number": "SPDK00000000000001", 00:10:50.617 "model_number": "SPDK bdev Controller", 00:10:50.617 "max_namespaces": 32, 00:10:50.617 "min_cntlid": 1, 00:10:50.617 "max_cntlid": 65519, 00:10:50.617 "namespaces": [ 00:10:50.617 { 00:10:50.617 "nsid": 1, 00:10:50.617 "bdev_name": "Null1", 00:10:50.617 "name": "Null1", 00:10:50.617 "nguid": "2BE8E9215F2145DBAFB7EE53106BDC52", 00:10:50.617 "uuid": "2be8e921-5f21-45db-afb7-ee53106bdc52" 00:10:50.617 } 00:10:50.617 ] 00:10:50.617 }, 00:10:50.617 { 00:10:50.617 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:50.617 "subtype": "NVMe", 00:10:50.617 "listen_addresses": [ 00:10:50.617 { 00:10:50.617 "trtype": "TCP", 00:10:50.617 "adrfam": "IPv4", 00:10:50.617 "traddr": "10.0.0.2", 00:10:50.617 "trsvcid": "4420" 00:10:50.617 } 00:10:50.617 ], 00:10:50.617 "allow_any_host": true, 00:10:50.617 "hosts": [], 00:10:50.617 "serial_number": "SPDK00000000000002", 00:10:50.617 "model_number": "SPDK bdev Controller", 00:10:50.617 "max_namespaces": 32, 00:10:50.617 "min_cntlid": 1, 00:10:50.617 "max_cntlid": 65519, 00:10:50.617 "namespaces": [ 00:10:50.617 { 00:10:50.617 "nsid": 1, 00:10:50.617 "bdev_name": "Null2", 00:10:50.617 "name": "Null2", 00:10:50.617 "nguid": "1E77B1B53CAD479A919CAFA18C72D3CE", 
00:10:50.617 "uuid": "1e77b1b5-3cad-479a-919c-afa18c72d3ce" 00:10:50.617 } 00:10:50.617 ] 00:10:50.617 }, 00:10:50.617 { 00:10:50.617 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:50.617 "subtype": "NVMe", 00:10:50.617 "listen_addresses": [ 00:10:50.617 { 00:10:50.617 "trtype": "TCP", 00:10:50.617 "adrfam": "IPv4", 00:10:50.617 "traddr": "10.0.0.2", 00:10:50.617 "trsvcid": "4420" 00:10:50.617 } 00:10:50.617 ], 00:10:50.617 "allow_any_host": true, 00:10:50.617 "hosts": [], 00:10:50.617 "serial_number": "SPDK00000000000003", 00:10:50.617 "model_number": "SPDK bdev Controller", 00:10:50.617 "max_namespaces": 32, 00:10:50.617 "min_cntlid": 1, 00:10:50.617 "max_cntlid": 65519, 00:10:50.617 "namespaces": [ 00:10:50.617 { 00:10:50.617 "nsid": 1, 00:10:50.617 "bdev_name": "Null3", 00:10:50.617 "name": "Null3", 00:10:50.617 "nguid": "8618E4FF1CD44D249FF284F82441DF95", 00:10:50.617 "uuid": "8618e4ff-1cd4-4d24-9ff2-84f82441df95" 00:10:50.617 } 00:10:50.617 ] 00:10:50.617 }, 00:10:50.617 { 00:10:50.617 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:50.617 "subtype": "NVMe", 00:10:50.617 "listen_addresses": [ 00:10:50.617 { 00:10:50.617 "trtype": "TCP", 00:10:50.617 "adrfam": "IPv4", 00:10:50.617 "traddr": "10.0.0.2", 00:10:50.617 "trsvcid": "4420" 00:10:50.617 } 00:10:50.617 ], 00:10:50.617 "allow_any_host": true, 00:10:50.617 "hosts": [], 00:10:50.617 "serial_number": "SPDK00000000000004", 00:10:50.617 "model_number": "SPDK bdev Controller", 00:10:50.617 "max_namespaces": 32, 00:10:50.617 "min_cntlid": 1, 00:10:50.617 "max_cntlid": 65519, 00:10:50.617 "namespaces": [ 00:10:50.617 { 00:10:50.617 "nsid": 1, 00:10:50.617 "bdev_name": "Null4", 00:10:50.617 "name": "Null4", 00:10:50.617 "nguid": "1AE49174B2A34059AE6C78BCF95694CA", 00:10:50.617 "uuid": "1ae49174-b2a3-4059-ae6c-78bcf95694ca" 00:10:50.617 } 00:10:50.617 ] 00:10:50.617 } 00:10:50.617 ] 00:10:50.617 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 
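The setup sequence traced above (one `nvmf_create_transport`, then `bdev_null_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, and `nvmf_subsystem_add_listener` for each of cnode1..cnode4) can be sketched as the JSON-RPC 2.0 payloads that SPDK's rpc.py sends over /var/tmp/spdk.sock. This is a minimal illustration reconstructed from the log, not the test script itself; the exact parameter names (e.g. `num_blocks`, `allow_any_host`) are assumptions about how rpc.py maps the CLI flags shown in the trace.

```python
import json

def rpc_request(method, params, req_id):
    """Build a JSON-RPC 2.0 request like those rpc.py sends to the SPDK target."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def build_setup_requests():
    # Transport first (the log shows: nvmf_create_transport -t tcp -o -u 8192).
    # Only trtype is reproduced here; the other flags are elided.
    reqs = [rpc_request("nvmf_create_transport", {"trtype": "tcp"}, 1)]
    rid = 2
    for i in range(1, 5):
        nqn = f"nqn.2016-06.io.spdk:cnode{i}"
        # bdev_null_create NullN 102400 512, as in the trace.
        reqs.append(rpc_request(
            "bdev_null_create",
            {"name": f"Null{i}", "num_blocks": 102400, "block_size": 512}, rid))
        rid += 1
        # nvmf_create_subsystem -a -s SPDK0000000000000N
        reqs.append(rpc_request(
            "nvmf_create_subsystem",
            {"nqn": nqn, "allow_any_host": True,
             "serial_number": f"SPDK{i:014d}"}, rid))
        rid += 1
        # Attach the null bdev as namespace 1.
        reqs.append(rpc_request(
            "nvmf_subsystem_add_ns",
            {"nqn": nqn, "namespace": {"bdev_name": f"Null{i}"}}, rid))
        rid += 1
        # Listen on the same TCP address/port the discovery log reports.
        reqs.append(rpc_request(
            "nvmf_subsystem_add_listener",
            {"nqn": nqn,
             "listen_address": {"trtype": "tcp", "traddr": "10.0.0.2",
                               "trsvcid": "4420"}}, rid))
        rid += 1
    return reqs

if __name__ == "__main__":
    for r in build_setup_requests():
        print(json.dumps(r))
```

Sending these payloads would of course require a running nvmf_tgt process listening on the RPC socket; here they only document the shape of the sequence the trace records.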
16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.876 rmmod nvme_tcp 00:10:50.876 rmmod nvme_fabrics 00:10:50.876 rmmod nvme_keyring 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2649557 ']' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2649557 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2649557 ']' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2649557 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649557 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649557' 00:10:50.876 killing process with pid 2649557 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2649557 00:10:50.876 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2649557 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.136 16:03:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.675 00:10:53.675 real 0m10.031s 00:10:53.675 user 0m8.345s 00:10:53.675 sys 0m4.895s 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.675 ************************************ 00:10:53.675 END TEST nvmf_target_discovery 00:10:53.675 ************************************ 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.675 ************************************ 00:10:53.675 START TEST nvmf_referrals 00:10:53.675 ************************************ 00:10:53.675 16:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.675 * Looking for test storage... 
00:10:53.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.675 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:53.676 16:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.676 
--rc genhtml_branch_coverage=1 00:10:53.676 --rc genhtml_function_coverage=1 00:10:53.676 --rc genhtml_legend=1 00:10:53.676 --rc geninfo_all_blocks=1 00:10:53.676 --rc geninfo_unexecuted_blocks=1 00:10:53.676 00:10:53.676 ' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.676 --rc genhtml_branch_coverage=1 00:10:53.676 --rc genhtml_function_coverage=1 00:10:53.676 --rc genhtml_legend=1 00:10:53.676 --rc geninfo_all_blocks=1 00:10:53.676 --rc geninfo_unexecuted_blocks=1 00:10:53.676 00:10:53.676 ' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.676 --rc genhtml_branch_coverage=1 00:10:53.676 --rc genhtml_function_coverage=1 00:10:53.676 --rc genhtml_legend=1 00:10:53.676 --rc geninfo_all_blocks=1 00:10:53.676 --rc geninfo_unexecuted_blocks=1 00:10:53.676 00:10:53.676 ' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.676 --rc genhtml_branch_coverage=1 00:10:53.676 --rc genhtml_function_coverage=1 00:10:53.676 --rc genhtml_legend=1 00:10:53.676 --rc geninfo_all_blocks=1 00:10:53.676 --rc geninfo_unexecuted_blocks=1 00:10:53.676 00:10:53.676 ' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.676 
16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.676 16:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.676 16:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.676 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.677 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.677 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.677 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.677 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.677 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.677 16:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:00.251 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:00.251 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:00.251 Found net devices under 0000:86:00.0: cvl_0_0 00:11:00.251 16:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:00.251 Found net devices under 0000:86:00.1: cvl_0_1 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.251 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.252 16:03:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:11:00.252 00:11:00.252 --- 10.0.0.2 ping statistics --- 00:11:00.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.252 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:11:00.252 00:11:00.252 --- 10.0.0.1 ping statistics --- 00:11:00.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.252 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2653351 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2653351 00:11:00.252 
16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2653351 ']' 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.252 16:04:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.252 [2024-11-20 16:04:00.275955] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:11:00.252 [2024-11-20 16:04:00.276004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.252 [2024-11-20 16:04:00.358172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.252 [2024-11-20 16:04:00.402109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.252 [2024-11-20 16:04:00.402146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:00.252 [2024-11-20 16:04:00.402153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.252 [2024-11-20 16:04:00.402159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.252 [2024-11-20 16:04:00.402165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.252 [2024-11-20 16:04:00.403803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.252 [2024-11-20 16:04:00.403909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.252 [2024-11-20 16:04:00.403932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.252 [2024-11-20 16:04:00.403933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 [2024-11-20 16:04:01.165330] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 [2024-11-20 16:04:01.188086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:00.511 16:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:00.511 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.807 16:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:00.807 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:01.201 16:04:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:01.498 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 
10.0.0.2 -s 8009 -o json 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:01.797 16:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:01.797 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ 
'' == '' ]] 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:02.071 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.369 16:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.369 16:04:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.369 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.626 16:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.626 rmmod nvme_tcp 00:11:02.626 rmmod nvme_fabrics 00:11:02.626 rmmod nvme_keyring 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2653351 ']' 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2653351 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2653351 ']' 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2653351 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653351 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653351' 00:11:02.626 killing process with pid 2653351 00:11:02.626 16:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2653351 00:11:02.626 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2653351 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.885 16:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.786 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.786 00:11:04.786 real 0m11.600s 00:11:04.786 user 0m15.183s 00:11:04.786 sys 0m5.322s 00:11:04.786 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.786 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.786 ************************************ 00:11:04.786 END TEST nvmf_referrals 00:11:04.786 ************************************ 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.046 ************************************ 00:11:05.046 START TEST nvmf_connect_disconnect 00:11:05.046 ************************************ 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:05.046 * Looking for test storage... 
00:11:05.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.046 --rc genhtml_branch_coverage=1 00:11:05.046 --rc genhtml_function_coverage=1 00:11:05.046 --rc genhtml_legend=1 00:11:05.046 --rc geninfo_all_blocks=1 00:11:05.046 --rc geninfo_unexecuted_blocks=1 00:11:05.046 00:11:05.046 ' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.046 --rc genhtml_branch_coverage=1 00:11:05.046 --rc genhtml_function_coverage=1 00:11:05.046 --rc genhtml_legend=1 00:11:05.046 --rc geninfo_all_blocks=1 00:11:05.046 --rc geninfo_unexecuted_blocks=1 00:11:05.046 00:11:05.046 ' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.046 --rc genhtml_branch_coverage=1 00:11:05.046 --rc genhtml_function_coverage=1 00:11:05.046 --rc genhtml_legend=1 00:11:05.046 --rc geninfo_all_blocks=1 00:11:05.046 --rc geninfo_unexecuted_blocks=1 00:11:05.046 00:11:05.046 ' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.046 --rc genhtml_branch_coverage=1 00:11:05.046 --rc genhtml_function_coverage=1 00:11:05.046 --rc genhtml_legend=1 00:11:05.046 --rc geninfo_all_blocks=1 00:11:05.046 --rc geninfo_unexecuted_blocks=1 00:11:05.046 00:11:05.046 ' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.046 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.047 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.306 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.306 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.306 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.306 16:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.875 16:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:11.875 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.876 16:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:11.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:11.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.876 16:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:11.876 Found net devices under 0000:86:00.0: cvl_0_0 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.876 16:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:11.876 Found net devices under 0000:86:00.1: cvl_0_1 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.876 16:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:11.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:11:11.876 00:11:11.876 --- 10.0.0.2 ping statistics --- 00:11:11.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.876 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:11:11.876 00:11:11.876 --- 10.0.0.1 ping statistics --- 00:11:11.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.876 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.876 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2657630 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2657630 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2657630 ']' 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.877 16:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:11.877 [2024-11-20 16:04:11.886642] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:11:11.877 [2024-11-20 16:04:11.886686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.877 [2024-11-20 16:04:11.968317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.877 [2024-11-20 16:04:12.012384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:11.877 [2024-11-20 16:04:12.012422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.877 [2024-11-20 16:04:12.012429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.877 [2024-11-20 16:04:12.012435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.877 [2024-11-20 16:04:12.012440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.877 [2024-11-20 16:04:12.014029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.877 [2024-11-20 16:04:12.014141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.877 [2024-11-20 16:04:12.014169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.877 [2024-11-20 16:04:12.014169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:12.136 16:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:12.136 [2024-11-20 16:04:12.767062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.136 16:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:12.136 [2024-11-20 16:04:12.844317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:12.136 16:04:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:15.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:28.608 16:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.608 rmmod nvme_tcp 00:11:28.608 rmmod nvme_fabrics 00:11:28.608 rmmod nvme_keyring 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2657630 ']' 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2657630 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2657630 ']' 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2657630 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2657630 
00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2657630' 00:11:28.608 killing process with pid 2657630 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2657630 00:11:28.608 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2657630 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.867 16:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.867 16:04:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.773 00:11:30.773 real 0m25.896s 00:11:30.773 user 1m11.242s 00:11:30.773 sys 0m5.842s 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.773 ************************************ 00:11:30.773 END TEST nvmf_connect_disconnect 00:11:30.773 ************************************ 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.773 16:04:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.033 ************************************ 00:11:31.033 START TEST nvmf_multitarget 00:11:31.033 ************************************ 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:31.033 * Looking for test storage... 
00:11:31.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.033 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:31.033 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.033 --rc genhtml_branch_coverage=1 00:11:31.033 --rc genhtml_function_coverage=1 00:11:31.034 --rc genhtml_legend=1 00:11:31.034 --rc geninfo_all_blocks=1 00:11:31.034 --rc geninfo_unexecuted_blocks=1 00:11:31.034 00:11:31.034 ' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:31.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.034 --rc genhtml_branch_coverage=1 00:11:31.034 --rc genhtml_function_coverage=1 00:11:31.034 --rc genhtml_legend=1 00:11:31.034 --rc geninfo_all_blocks=1 00:11:31.034 --rc geninfo_unexecuted_blocks=1 00:11:31.034 00:11:31.034 ' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:31.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.034 --rc genhtml_branch_coverage=1 00:11:31.034 --rc genhtml_function_coverage=1 00:11:31.034 --rc genhtml_legend=1 00:11:31.034 --rc geninfo_all_blocks=1 00:11:31.034 --rc geninfo_unexecuted_blocks=1 00:11:31.034 00:11:31.034 ' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:31.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.034 --rc genhtml_branch_coverage=1 00:11:31.034 --rc genhtml_function_coverage=1 00:11:31.034 --rc genhtml_legend=1 00:11:31.034 --rc geninfo_all_blocks=1 00:11:31.034 --rc geninfo_unexecuted_blocks=1 00:11:31.034 00:11:31.034 ' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.034 16:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.034 16:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.034 16:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:37.605 16:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.605 16:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:37.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:37.605 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.605 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.606 16:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:37.606 Found net devices under 0000:86:00.0: cvl_0_0 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.606 
16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:37.606 Found net devices under 0000:86:00.1: cvl_0_1 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.606 16:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:11:37.606 00:11:37.606 --- 10.0.0.2 ping statistics --- 00:11:37.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.606 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:11:37.606 00:11:37.606 --- 10.0.0.1 ping statistics --- 00:11:37.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.606 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2664056 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2664056 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2664056 ']' 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.606 16:04:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:37.606 [2024-11-20 16:04:37.864712] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:11:37.606 [2024-11-20 16:04:37.864767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.606 [2024-11-20 16:04:37.944729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.606 [2024-11-20 16:04:37.988426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.606 [2024-11-20 16:04:37.988464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:37.606 [2024-11-20 16:04:37.988471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.606 [2024-11-20 16:04:37.988477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.606 [2024-11-20 16:04:37.988482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.606 [2024-11-20 16:04:37.990072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.606 [2024-11-20 16:04:37.990185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.606 [2024-11-20 16:04:37.990292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.606 [2024-11-20 16:04:37.990293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.606 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.606 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:37.606 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.606 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.606 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:37.607 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.607 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:37.607 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:37.607 16:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:37.607 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:37.607 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:37.607 "nvmf_tgt_1" 00:11:37.607 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:37.607 "nvmf_tgt_2" 00:11:37.865 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:37.865 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:37.865 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:37.865 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:37.865 true 00:11:37.865 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:38.123 true 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.123 rmmod nvme_tcp 00:11:38.123 rmmod nvme_fabrics 00:11:38.123 rmmod nvme_keyring 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2664056 ']' 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2664056 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2664056 ']' 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2664056 00:11:38.123 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:38.382 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.382 16:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664056 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664056' 00:11:38.382 killing process with pid 2664056 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2664056 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2664056 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.382 16:04:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.920 00:11:40.920 real 0m9.603s 00:11:40.920 user 0m7.318s 00:11:40.920 sys 0m4.857s 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:40.920 ************************************ 00:11:40.920 END TEST nvmf_multitarget 00:11:40.920 ************************************ 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.920 ************************************ 00:11:40.920 START TEST nvmf_rpc 00:11:40.920 ************************************ 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:40.920 * Looking for test storage... 
00:11:40.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.920 16:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.920 --rc genhtml_branch_coverage=1 00:11:40.920 --rc genhtml_function_coverage=1 00:11:40.920 --rc genhtml_legend=1 00:11:40.920 --rc geninfo_all_blocks=1 00:11:40.920 --rc geninfo_unexecuted_blocks=1 
00:11:40.920 00:11:40.920 ' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.920 --rc genhtml_branch_coverage=1 00:11:40.920 --rc genhtml_function_coverage=1 00:11:40.920 --rc genhtml_legend=1 00:11:40.920 --rc geninfo_all_blocks=1 00:11:40.920 --rc geninfo_unexecuted_blocks=1 00:11:40.920 00:11:40.920 ' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.920 --rc genhtml_branch_coverage=1 00:11:40.920 --rc genhtml_function_coverage=1 00:11:40.920 --rc genhtml_legend=1 00:11:40.920 --rc geninfo_all_blocks=1 00:11:40.920 --rc geninfo_unexecuted_blocks=1 00:11:40.920 00:11:40.920 ' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.920 --rc genhtml_branch_coverage=1 00:11:40.920 --rc genhtml_function_coverage=1 00:11:40.920 --rc genhtml_legend=1 00:11:40.920 --rc geninfo_all_blocks=1 00:11:40.920 --rc geninfo_unexecuted_blocks=1 00:11:40.920 00:11:40.920 ' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.920 16:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.920 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.921 16:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.921 16:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.491 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.492 
16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:47.492 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:47.492 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:47.492 Found net devices under 0000:86:00.0: cvl_0_0 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:47.492 Found net devices under 0000:86:00.1: cvl_0_1 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.492 16:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.492 
16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:11:47.492 00:11:47.492 --- 10.0.0.2 ping statistics --- 00:11:47.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.492 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
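The `nvmf_tcp_init` sequence traced above reduces to a handful of `ip`/`iptables` commands. A hedged sketch of that setup fragment, assuming root privileges and the `cvl_0_0`/`cvl_0_1` netdevs found earlier in the log (not a runnable test, since it reconfigures host networking):

```shell
# Sketch of the target-namespace setup performed by nvmf_tcp_init above;
# requires root and assumes cvl_0_0/cvl_0_1 exist as in this log.
ip netns add cvl_0_0_ns_spdk                   # namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                             # target reachable from host
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and vice versa
```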
00:11:47.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:47.492 00:11:47.492 --- 10.0.0.1 ping statistics --- 00:11:47.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.492 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2667840 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2667840 00:11:47.492 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2667840 ']' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 [2024-11-20 16:04:47.591556] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:11:47.493 [2024-11-20 16:04:47.591606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.493 [2024-11-20 16:04:47.656564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.493 [2024-11-20 16:04:47.700451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.493 [2024-11-20 16:04:47.700489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:47.493 [2024-11-20 16:04:47.700496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.493 [2024-11-20 16:04:47.700502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.493 [2024-11-20 16:04:47.700507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.493 [2024-11-20 16:04:47.703967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.493 [2024-11-20 16:04:47.704008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.493 [2024-11-20 16:04:47.704112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.493 [2024-11-20 16:04:47.704112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.493 16:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:47.493 "tick_rate": 2300000000, 00:11:47.493 "poll_groups": [ 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_000", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [] 00:11:47.493 }, 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_001", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [] 00:11:47.493 }, 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_002", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [] 00:11:47.493 }, 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_003", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [] 00:11:47.493 } 00:11:47.493 ] 00:11:47.493 }' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:47.493 16:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 [2024-11-20 16:04:47.962350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:47.493 "tick_rate": 2300000000, 00:11:47.493 "poll_groups": [ 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_000", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [ 00:11:47.493 { 00:11:47.493 "trtype": "TCP" 00:11:47.493 } 00:11:47.493 ] 00:11:47.493 }, 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_001", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 
"completed_nvme_io": 0, 00:11:47.493 "transports": [ 00:11:47.493 { 00:11:47.493 "trtype": "TCP" 00:11:47.493 } 00:11:47.493 ] 00:11:47.493 }, 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_002", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [ 00:11:47.493 { 00:11:47.493 "trtype": "TCP" 00:11:47.493 } 00:11:47.493 ] 00:11:47.493 }, 00:11:47.493 { 00:11:47.493 "name": "nvmf_tgt_poll_group_003", 00:11:47.493 "admin_qpairs": 0, 00:11:47.493 "io_qpairs": 0, 00:11:47.493 "current_admin_qpairs": 0, 00:11:47.493 "current_io_qpairs": 0, 00:11:47.493 "pending_bdev_io": 0, 00:11:47.493 "completed_nvme_io": 0, 00:11:47.493 "transports": [ 00:11:47.493 { 00:11:47.493 "trtype": "TCP" 00:11:47.493 } 00:11:47.493 ] 00:11:47.493 } 00:11:47.493 ] 00:11:47.493 }' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:47.493 16:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:47.493 
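The `jsum` helper invoked above (`target/rpc.sh@19-20`) sums one numeric field across all poll groups by piping `jq` output through `awk`. A minimal sketch with an inline JSON sample standing in for real `nvmf_get_stats` output (the sample is an assumption; the test pipes the live RPC result):

```shell
# jsum-style aggregation: pull .poll_groups[].admin_qpairs with jq,
# then sum the values with awk, mirroring the helper seen in the log.
stats='{"poll_groups":[{"admin_qpairs":1},{"admin_qpairs":0},{"admin_qpairs":2}]}'
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
# prints 3
```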
16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 Malloc1 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.493 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:47.494 16:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 [2024-11-20 16:04:48.144328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:47.494 [2024-11-20 16:04:48.173040] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:47.494 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:47.494 could not add new controller: failed to write to nvme-fabrics device 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.494 16:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.867 16:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.867 16:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:48.867 16:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.867 16:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:48.867 16:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
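The `NOT` wrapper exercised above runs a command that is expected to fail (here, `nvme connect` against a subsystem that does not yet allow the host) and inverts its exit status. A simplified sketch of that pattern (the real helper in `autotest_common.sh` also validates the executable and tracks `es`; this version is an assumption that keeps only the inversion):

```shell
# NOT-style helper: succeed only when the wrapped command fails,
# matching how the log treats the rejected nvme connect as a pass.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed as expected
}

NOT false && echo "expected failure confirmed"
# prints expected failure confirmed
```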
00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:50.767 16:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.767 [2024-11-20 16:04:51.546151] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:50.767 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:50.767 could not add new controller: failed to write to nvme-fabrics device 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:50.767 
16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.768 16:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.207 16:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.207 16:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.207 16:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.207 16:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:52.207 16:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:54.104 16:04:54 
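The `waitforserial` loop appearing throughout this section polls `lsblk -l -o NAME,SERIAL` until a device carrying the given serial shows up, with a bounded retry count. A hedged sketch; the optional `lister` argument is an assumption added here so the loop can be exercised without real NVMe devices:

```shell
# waitforserial-style polling: retry until the serial appears in the
# device listing, giving up after ~15 attempts as the log's loop does.
waitforserial() {
  local serial=$1 lister=${2:-"lsblk -l -o NAME,SERIAL"} i=0
  while (( i++ <= 15 )); do
    # grep -qw matches the serial as a whole word, as in the log
    $lister 2>/dev/null | grep -qw "$serial" && return 0
    sleep 1
  done
  return 1
}
```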
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.104 [2024-11-20 16:04:54.857442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.104 16:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:55.474 16:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:55.474 16:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:55.474 16:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:55.474 16:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:55.474 16:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:57.371 16:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:57.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.371 [2024-11-20 16:04:58.162452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.371 16:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:58.741 16:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:58.741 16:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:58.741 16:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:58.741 16:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:58.741 16:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:00.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.640 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.640 [2024-11-20 16:05:01.472223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:00.898 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.899 16:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:02.270 16:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:02.270 16:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:02.270 16:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:02.270 16:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:02.270 16:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:04.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.170 [2024-11-20 16:05:04.814410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.170 16:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:05.545 16:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:05.545 16:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:05.545 16:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:05.545 16:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:05.545 16:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:07.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.455 [2024-11-20 16:05:08.203068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.455 16:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:08.832 16:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:08.832 16:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:08.832 16:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:08.832 16:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:08.832 16:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:10.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.734 [2024-11-20 16:05:11.560674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.734 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 [2024-11-20 16:05:11.608770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 [2024-11-20 16:05:11.656910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.994 [2024-11-20 16:05:11.705074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:10.994 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t
tcp -a 10.0.0.2 -s 4420 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.995 [2024-11-20 16:05:11.753240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:10.995 "tick_rate": 2300000000, 00:12:10.995 "poll_groups": [ 00:12:10.995 { 00:12:10.995 "name": "nvmf_tgt_poll_group_000", 00:12:10.995 "admin_qpairs": 2, 00:12:10.995 "io_qpairs": 168, 00:12:10.995 "current_admin_qpairs": 0, 00:12:10.995 "current_io_qpairs": 0, 00:12:10.995 "pending_bdev_io": 0, 00:12:10.995 "completed_nvme_io": 296, 00:12:10.995 "transports": [ 00:12:10.995 { 00:12:10.995 "trtype": "TCP" 00:12:10.995 } 00:12:10.995 ] 00:12:10.995 }, 00:12:10.995 { 00:12:10.995 "name": "nvmf_tgt_poll_group_001", 00:12:10.995 "admin_qpairs": 2, 00:12:10.995 "io_qpairs": 168, 00:12:10.995 "current_admin_qpairs": 0, 00:12:10.995 "current_io_qpairs": 0, 00:12:10.995 "pending_bdev_io": 0, 00:12:10.995 "completed_nvme_io": 219, 00:12:10.995 "transports": [ 00:12:10.995 { 00:12:10.995 "trtype": "TCP" 00:12:10.995 } 00:12:10.995 ] 00:12:10.995 }, 00:12:10.995 { 00:12:10.995 "name": "nvmf_tgt_poll_group_002", 00:12:10.995 "admin_qpairs": 1, 00:12:10.995 "io_qpairs": 168, 00:12:10.995 "current_admin_qpairs": 0, 00:12:10.995 "current_io_qpairs": 0, 00:12:10.995 "pending_bdev_io": 0, 
00:12:10.995 "completed_nvme_io": 267, 00:12:10.995 "transports": [ 00:12:10.995 { 00:12:10.995 "trtype": "TCP" 00:12:10.995 } 00:12:10.995 ] 00:12:10.995 }, 00:12:10.995 { 00:12:10.995 "name": "nvmf_tgt_poll_group_003", 00:12:10.995 "admin_qpairs": 2, 00:12:10.995 "io_qpairs": 168, 00:12:10.995 "current_admin_qpairs": 0, 00:12:10.995 "current_io_qpairs": 0, 00:12:10.995 "pending_bdev_io": 0, 00:12:10.995 "completed_nvme_io": 240, 00:12:10.995 "transports": [ 00:12:10.995 { 00:12:10.995 "trtype": "TCP" 00:12:10.995 } 00:12:10.995 ] 00:12:10.995 } 00:12:10.995 ] 00:12:10.995 }' 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:10.995 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:11.254 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.255 rmmod nvme_tcp 00:12:11.255 rmmod nvme_fabrics 00:12:11.255 rmmod nvme_keyring 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2667840 ']' 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2667840 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2667840 ']' 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2667840 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.255 16:05:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667840 00:12:11.255 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.255 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.255 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667840' 00:12:11.255 killing process with pid 2667840 00:12:11.255 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2667840 00:12:11.255 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2667840 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.514 16:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.050 00:12:14.050 real 0m32.952s 00:12:14.050 user 1m39.331s 00:12:14.050 sys 0m6.542s 00:12:14.050 16:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.050 ************************************ 00:12:14.050 END TEST nvmf_rpc 00:12:14.050 ************************************ 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.050 ************************************ 00:12:14.050 START TEST nvmf_invalid 00:12:14.050 ************************************ 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:14.050 * Looking for test storage... 
00:12:14.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.050 --rc genhtml_branch_coverage=1 00:12:14.050 --rc 
genhtml_function_coverage=1 00:12:14.050 --rc genhtml_legend=1 00:12:14.050 --rc geninfo_all_blocks=1 00:12:14.050 --rc geninfo_unexecuted_blocks=1 00:12:14.050 00:12:14.050 ' 00:12:14.050 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.050 --rc genhtml_branch_coverage=1 00:12:14.050 --rc genhtml_function_coverage=1 00:12:14.050 --rc genhtml_legend=1 00:12:14.050 --rc geninfo_all_blocks=1 00:12:14.051 --rc geninfo_unexecuted_blocks=1 00:12:14.051 00:12:14.051 ' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.051 --rc genhtml_branch_coverage=1 00:12:14.051 --rc genhtml_function_coverage=1 00:12:14.051 --rc genhtml_legend=1 00:12:14.051 --rc geninfo_all_blocks=1 00:12:14.051 --rc geninfo_unexecuted_blocks=1 00:12:14.051 00:12:14.051 ' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.051 --rc genhtml_branch_coverage=1 00:12:14.051 --rc genhtml_function_coverage=1 00:12:14.051 --rc genhtml_legend=1 00:12:14.051 --rc geninfo_all_blocks=1 00:12:14.051 --rc geninfo_unexecuted_blocks=1 00:12:14.051 00:12:14.051 ' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.051 16:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.051 16:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.051 16:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.620 16:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.620 16:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:20.620 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:20.620 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:20.620 Found net devices under 0000:86:00.0: cvl_0_0 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:20.620 Found net devices under 0000:86:00.1: cvl_0_1 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.620 16:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.620 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.621 16:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:12:20.621 00:12:20.621 --- 10.0.0.2 ping statistics --- 00:12:20.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.621 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:20.621 00:12:20.621 --- 10.0.0.1 ping statistics --- 00:12:20.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.621 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.621 16:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2675449 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2675449 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2675449 ']' 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
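At this point the log shows `nvmfappstart` launching `nvmf_tgt` inside the target netns and then `waitforlisten` polling until the RPC socket `/var/tmp/spdk.sock` appears. A minimal sketch of that polling idiom follows; the function name, paths, and retry budget here are illustrative stand-ins, not SPDK's actual helper (which also probes the RPC endpoint, not just path existence):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for waitforlisten: poll until a path appears
# or the retry budget is exhausted.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

# demo: create the "socket" after a short delay in the background
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
if wait_for_path "$tmp/spdk.sock" 50; then
    echo "listening"
else
    echo "timed out"
fi
wait
rm -rf "$tmp"
```

The real helper additionally traps SIGINT/SIGTERM so a hung target is killed rather than leaking, which is why the log installs a `trap ... nvmftestfini` immediately after startup.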
00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.621 [2024-11-20 16:05:20.564558] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:12:20.621 [2024-11-20 16:05:20.564607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.621 [2024-11-20 16:05:20.647742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.621 [2024-11-20 16:05:20.690637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.621 [2024-11-20 16:05:20.690674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.621 [2024-11-20 16:05:20.690682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.621 [2024-11-20 16:05:20.690687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.621 [2024-11-20 16:05:20.690693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
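Each negative test that follows captures the `rpc.py` JSON-RPC error text into `$out` and matches it against an expected substring with a bash glob; the `*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t*` patterns in the log are just that substring with every character backslash-quoted by xtrace. A minimal sketch of the check, using a hypothetical error payload in place of the real `rpc.py` output:

```shell
# Hypothetical error payload standing in for the rpc.py output in the log.
out='Got JSON-RPC error response: { "code": -32603, "message": "Unable to find target foobar" }'

# Substring match via bash glob, the same [[ $out == *...* ]] idiom
# invalid.sh uses to assert the expected failure mode.
if [[ $out == *"Unable to find target"* ]]; then
    echo "error matched"
else
    echo "unexpected error" >&2
fi
```

Because `[[ ... ]]` performs pattern matching rather than string equality, quoting the expected text inside the pattern is what keeps glob metacharacters in the error message (like the `?` in the random serial numbers below) from being interpreted.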
00:12:20.621 [2024-11-20 16:05:20.692154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.621 [2024-11-20 16:05:20.692267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.621 [2024-11-20 16:05:20.692372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.621 [2024-11-20 16:05:20.692373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:20.621 16:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24269 00:12:20.621 [2024-11-20 16:05:21.016147] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:20.621 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:20.621 { 00:12:20.621 "nqn": "nqn.2016-06.io.spdk:cnode24269", 00:12:20.621 "tgt_name": "foobar", 00:12:20.621 "method": "nvmf_create_subsystem", 00:12:20.621 "req_id": 1 00:12:20.621 } 00:12:20.621 Got JSON-RPC error 
response 00:12:20.621 response: 00:12:20.621 { 00:12:20.621 "code": -32603, 00:12:20.621 "message": "Unable to find target foobar" 00:12:20.621 }' 00:12:20.621 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:20.621 { 00:12:20.621 "nqn": "nqn.2016-06.io.spdk:cnode24269", 00:12:20.621 "tgt_name": "foobar", 00:12:20.621 "method": "nvmf_create_subsystem", 00:12:20.621 "req_id": 1 00:12:20.621 } 00:12:20.621 Got JSON-RPC error response 00:12:20.621 response: 00:12:20.621 { 00:12:20.621 "code": -32603, 00:12:20.621 "message": "Unable to find target foobar" 00:12:20.621 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:20.621 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:20.621 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29805 00:12:20.621 [2024-11-20 16:05:21.236934] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29805: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:20.621 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:20.621 { 00:12:20.621 "nqn": "nqn.2016-06.io.spdk:cnode29805", 00:12:20.621 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:20.621 "method": "nvmf_create_subsystem", 00:12:20.621 "req_id": 1 00:12:20.621 } 00:12:20.621 Got JSON-RPC error response 00:12:20.621 response: 00:12:20.621 { 00:12:20.621 "code": -32602, 00:12:20.621 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:20.621 }' 00:12:20.621 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:20.621 { 00:12:20.622 "nqn": "nqn.2016-06.io.spdk:cnode29805", 00:12:20.622 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:20.622 "method": "nvmf_create_subsystem", 
00:12:20.622 "req_id": 1 00:12:20.622 } 00:12:20.622 Got JSON-RPC error response 00:12:20.622 response: 00:12:20.622 { 00:12:20.622 "code": -32602, 00:12:20.622 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:20.622 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:20.622 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:20.622 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12501 00:12:20.622 [2024-11-20 16:05:21.441614] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12501: invalid model number 'SPDK_Controller' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:20.882 { 00:12:20.882 "nqn": "nqn.2016-06.io.spdk:cnode12501", 00:12:20.882 "model_number": "SPDK_Controller\u001f", 00:12:20.882 "method": "nvmf_create_subsystem", 00:12:20.882 "req_id": 1 00:12:20.882 } 00:12:20.882 Got JSON-RPC error response 00:12:20.882 response: 00:12:20.882 { 00:12:20.882 "code": -32602, 00:12:20.882 "message": "Invalid MN SPDK_Controller\u001f" 00:12:20.882 }' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:20.882 { 00:12:20.882 "nqn": "nqn.2016-06.io.spdk:cnode12501", 00:12:20.882 "model_number": "SPDK_Controller\u001f", 00:12:20.882 "method": "nvmf_create_subsystem", 00:12:20.882 "req_id": 1 00:12:20.882 } 00:12:20.882 Got JSON-RPC error response 00:12:20.882 response: 00:12:20.882 { 00:12:20.882 "code": -32602, 00:12:20.882 "message": "Invalid MN SPDK_Controller\u001f" 00:12:20.882 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:20.882 
16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.882 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.883 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? 
== \- ]] 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '?HH.lBVFJ498?kl"P5;6_' 00:12:20.883 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '?HH.lBVFJ498?kl"P5;6_' nqn.2016-06.io.spdk:cnode23857 00:12:21.143 [2024-11-20 16:05:21.778759] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23857: invalid serial number '?HH.lBVFJ498?kl"P5;6_' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:21.143 { 00:12:21.143 "nqn": "nqn.2016-06.io.spdk:cnode23857", 00:12:21.143 "serial_number": "?HH.lBVFJ498?kl\"P5;6_", 00:12:21.143 "method": "nvmf_create_subsystem", 00:12:21.143 "req_id": 1 00:12:21.143 } 00:12:21.143 Got JSON-RPC error response 00:12:21.143 response: 00:12:21.143 { 00:12:21.143 "code": -32602, 00:12:21.143 "message": "Invalid SN ?HH.lBVFJ498?kl\"P5;6_" 00:12:21.143 }' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:21.143 { 00:12:21.143 "nqn": "nqn.2016-06.io.spdk:cnode23857", 00:12:21.143 "serial_number": "?HH.lBVFJ498?kl\"P5;6_", 00:12:21.143 "method": "nvmf_create_subsystem", 00:12:21.143 "req_id": 1 00:12:21.143 } 00:12:21.143 Got JSON-RPC error response 00:12:21.143 response: 00:12:21.143 { 00:12:21.143 "code": -32602, 00:12:21.143 "message": "Invalid SN ?HH.lBVFJ498?kl\"P5;6_" 00:12:21.143 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' 
'53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:21.143 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.143 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 
00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:21.144 
16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.144 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:21.403 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:21.403 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:21.403 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.403 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:21.404 16:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:21.404 16:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:21.404 16:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:12:21.404 16:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+W@ugS4d39D`?xQ|[=3c>ApR/7t,,8$>k<r?}n'\''Mj' 00:12:23.731 16:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.635 00:12:25.635 real 0m12.046s 00:12:25.635 user 0m18.760s 00:12:25.635 sys 0m5.425s 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.635 ************************************ 00:12:25.635 END TEST nvmf_invalid 00:12:25.635 ************************************ 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- #
run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.635 16:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.895 ************************************ 00:12:25.895 START TEST nvmf_connect_stress 00:12:25.895 ************************************ 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:25.895 * Looking for test storage... 00:12:25.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 
00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.895 --rc genhtml_branch_coverage=1 00:12:25.895 --rc genhtml_function_coverage=1 00:12:25.895 --rc genhtml_legend=1 00:12:25.895 --rc 
geninfo_all_blocks=1 00:12:25.895 --rc geninfo_unexecuted_blocks=1 00:12:25.895 00:12:25.895 ' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.895 --rc genhtml_branch_coverage=1 00:12:25.895 --rc genhtml_function_coverage=1 00:12:25.895 --rc genhtml_legend=1 00:12:25.895 --rc geninfo_all_blocks=1 00:12:25.895 --rc geninfo_unexecuted_blocks=1 00:12:25.895 00:12:25.895 ' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.895 --rc genhtml_branch_coverage=1 00:12:25.895 --rc genhtml_function_coverage=1 00:12:25.895 --rc genhtml_legend=1 00:12:25.895 --rc geninfo_all_blocks=1 00:12:25.895 --rc geninfo_unexecuted_blocks=1 00:12:25.895 00:12:25.895 ' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.895 --rc genhtml_branch_coverage=1 00:12:25.895 --rc genhtml_function_coverage=1 00:12:25.895 --rc genhtml_legend=1 00:12:25.895 --rc geninfo_all_blocks=1 00:12:25.895 --rc geninfo_unexecuted_blocks=1 00:12:25.895 00:12:25.895 ' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.895 
16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.895 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.896 16:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.469 16:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:32.469 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:32.469 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.469 16:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:32.469 Found net devices under 0000:86:00.0: cvl_0_0 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:32.469 Found net devices under 0000:86:00.1: cvl_0_1 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.469 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:12:32.470 00:12:32.470 --- 10.0.0.2 ping statistics --- 00:12:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.470 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:12:32.470 00:12:32.470 --- 10.0.0.1 ping statistics --- 00:12:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.470 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2679832 00:12:32.470 16:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2679832 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2679832 ']' 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.470 [2024-11-20 16:05:32.656244] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:12:32.470 [2024-11-20 16:05:32.656289] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.470 [2024-11-20 16:05:32.733224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.470 [2024-11-20 16:05:32.773103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:32.470 [2024-11-20 16:05:32.773138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.470 [2024-11-20 16:05:32.773148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.470 [2024-11-20 16:05:32.773154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.470 [2024-11-20 16:05:32.773176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.470 [2024-11-20 16:05:32.774516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.470 [2024-11-20 16:05:32.774601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.470 [2024-11-20 16:05:32.774602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.470 [2024-11-20 16:05:32.920508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.470 [2024-11-20 16:05:32.940769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.470 NULL1 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2679853 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.470 16:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2679853 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.471 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.729 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.729 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2679853 00:12:32.729 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.729 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.729 16:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x [the same @591/@34/@35 liveness-poll trace repeats every ~0.25 s from 00:12:32.987 through 00:12:42.104 while perf PID 2679853 is still running] 00:12:42.362 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
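The trace above is connect_stress.sh's liveness check: `kill -0 2679853` sends no signal, it only tests whether the perf process still exists, and the script keeps issuing RPCs until the probe fails. A minimal sketch of that polling pattern (the background `sleep` stands in for the stress workload; PID variable names are illustrative, not taken from the script):

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 liveness-poll pattern seen in the trace above.
sleep 1 & worker=$!                        # stand-in for the perf workload
while kill -0 "$worker" 2>/dev/null; do    # signal 0: existence check only
    sleep 0.25                             # poll interval, as in the log cadence
done
wait "$worker"                             # reap the child, collect its status
echo "worker $worker exited"
```

Once the probe fails, the script falls through to `wait` and cleanup, which is exactly the "No such process" line that follows in the log.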
00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2679853 00:12:42.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2679853) - No such process 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2679853 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:42.362 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.363 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:42.363 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.363 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.363 rmmod nvme_tcp 00:12:42.363 rmmod nvme_fabrics 00:12:42.363 rmmod nvme_keyring 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2679832 ']' 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2679832 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2679832 ']' 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2679832 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2679832 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2679832' 00:12:42.620 killing process with pid 2679832 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2679832 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2679832 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.620 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.621 16:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.156 00:12:45.156 real 0m19.026s 00:12:45.156 user 0m39.653s 00:12:45.156 sys 0m8.477s 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.156 ************************************ 00:12:45.156 END TEST nvmf_connect_stress 00:12:45.156 ************************************ 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.156 ************************************ 00:12:45.156 START TEST nvmf_fused_ordering 00:12:45.156 ************************************ 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:45.156 * Looking for test storage... 00:12:45.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.156 16:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.156 --rc genhtml_branch_coverage=1 00:12:45.156 --rc genhtml_function_coverage=1 00:12:45.156 --rc genhtml_legend=1 00:12:45.156 --rc geninfo_all_blocks=1 00:12:45.156 --rc geninfo_unexecuted_blocks=1 00:12:45.156 00:12:45.156 ' 00:12:45.156 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.157 --rc genhtml_branch_coverage=1 00:12:45.157 --rc genhtml_function_coverage=1 00:12:45.157 --rc genhtml_legend=1 00:12:45.157 --rc geninfo_all_blocks=1 00:12:45.157 --rc geninfo_unexecuted_blocks=1 00:12:45.157 00:12:45.157 ' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.157 --rc genhtml_branch_coverage=1 00:12:45.157 --rc genhtml_function_coverage=1 00:12:45.157 --rc genhtml_legend=1 00:12:45.157 --rc geninfo_all_blocks=1 00:12:45.157 --rc geninfo_unexecuted_blocks=1 00:12:45.157 00:12:45.157 ' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.157 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:45.157 --rc genhtml_branch_coverage=1 00:12:45.157 --rc genhtml_function_coverage=1 00:12:45.157 --rc genhtml_legend=1 00:12:45.157 --rc geninfo_all_blocks=1 00:12:45.157 --rc geninfo_unexecuted_blocks=1 00:12:45.157 00:12:45.157 ' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.157 16:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.157 16:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.852 16:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:51.852 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.852 16:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:51.852 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.852 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.853 16:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:51.853 Found net devices under 0000:86:00.0: cvl_0_0 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:51.853 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:12:51.853 00:12:51.853 --- 10.0.0.2 ping statistics --- 00:12:51.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.853 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:12:51.853 00:12:51.853 --- 10.0.0.1 ping statistics --- 00:12:51.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.853 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:51.853 16:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2685072 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2685072 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2685072 ']' 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.853 16:05:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.853 [2024-11-20 16:05:51.862583] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:12:51.853 [2024-11-20 16:05:51.862629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.853 [2024-11-20 16:05:51.940043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.853 [2024-11-20 16:05:51.979117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.853 [2024-11-20 16:05:51.979154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.853 [2024-11-20 16:05:51.979161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.853 [2024-11-20 16:05:51.979168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.853 [2024-11-20 16:05:51.979174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:51.853 [2024-11-20 16:05:51.979753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.853 [2024-11-20 16:05:52.128378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.853 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.854 [2024-11-20 16:05:52.148586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.854 NULL1 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.854 16:05:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:51.854 [2024-11-20 16:05:52.207339] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:12:51.854 [2024-11-20 16:05:52.207371] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685249 ] 00:12:51.854 Attached to nqn.2016-06.io.spdk:cnode1 00:12:51.854 Namespace ID: 1 size: 1GB 00:12:51.854 fused_ordering(0) 00:12:51.854 fused_ordering(1) 00:12:51.854 fused_ordering(2) 00:12:51.854 fused_ordering(3) 00:12:51.854 fused_ordering(4) 00:12:51.854 fused_ordering(5) 00:12:51.854 fused_ordering(6) 00:12:51.854 fused_ordering(7) 00:12:51.854 fused_ordering(8) 00:12:51.854 fused_ordering(9) 00:12:51.854 fused_ordering(10) 00:12:51.854 fused_ordering(11) 00:12:51.854 fused_ordering(12) 00:12:51.854 fused_ordering(13) 00:12:51.854 fused_ordering(14) 00:12:51.854 fused_ordering(15) 00:12:51.854 fused_ordering(16) 00:12:51.854 fused_ordering(17) 00:12:51.854 fused_ordering(18) 00:12:51.854 fused_ordering(19) 00:12:51.854 fused_ordering(20) 00:12:51.854 fused_ordering(21) 00:12:51.854 fused_ordering(22) 00:12:51.854 fused_ordering(23) 00:12:51.854 fused_ordering(24) 00:12:51.854 fused_ordering(25) 00:12:51.854 fused_ordering(26) 00:12:51.854 fused_ordering(27) 00:12:51.854 
fused_ordering(28)
[... fused_ordering(29) through fused_ordering(1022) elided: identical per-iteration log entries, elapsed timestamps advancing from 00:12:51.854 to 00:12:53.511 ...]
fused_ordering(1023)
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2685072 ']'
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2685072
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2685072 ']'
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2685072
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2685072
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2685072'
killing process with pid 2685072
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2685072
00:12:53.511 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2685072
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:53.770 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:55.675
00:12:55.675 real 0m10.851s
00:12:55.675 user 0m5.184s
00:12:55.675 sys 0m5.905s
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:55.675 ************************************
00:12:55.675 END TEST nvmf_fused_ordering
00:12:55.675 ************************************
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:55.675 ************************************
00:12:55.675 START TEST nvmf_ns_masking
00:12:55.675 ************************************
00:12:55.675 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:12:55.934 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc 
genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.934 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d92df951-b4bc-4bb5-b03c-65ddf155e759 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=146674a3-a5ae-4dc4-ba98-eab5cd716345 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b561a427-5482-4309-b008-068ccd6f9095 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.935 16:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:02.504 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.504 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.504 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.505 16:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.505 16:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:02.505 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:02.505 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:13:02.505 Found net devices under 0000:86:00.0: cvl_0_0 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:02.505 Found net devices under 0000:86:00.1: cvl_0_1 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:13:02.505 00:13:02.505 --- 10.0.0.2 ping statistics --- 00:13:02.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.505 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms
00:13:02.505
00:13:02.505 --- 10.0.0.1 ping statistics ---
00:13:02.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:02.505 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:02.505 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2689142
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2689142
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2689142 ']'
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:02.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:02.506 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:02.506 [2024-11-20 16:06:02.745385] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:13:02.506 [2024-11-20 16:06:02.745431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:02.506 [2024-11-20 16:06:02.826056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:02.506 [2024-11-20 16:06:02.865356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:02.506 [2024-11-20 16:06:02.865391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:02.506 [2024-11-20 16:06:02.865398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:02.506 [2024-11-20 16:06:02.865404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:02.506 [2024-11-20 16:06:02.865409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:02.506 [2024-11-20 16:06:02.865997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:02.765 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:02.765 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:13:02.765 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:02.765 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:02.765 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:03.023 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:03.024 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:03.024 [2024-11-20 16:06:03.792881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:03.024 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:13:03.024 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:13:03.024 16:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:03.283 Malloc1
00:13:03.283 16:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:03.541 Malloc2
00:13:03.541 16:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:03.800 16:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:13:04.059 16:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:04.059 [2024-11-20 16:06:04.831813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:04.059 16:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:13:04.059 16:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b561a427-5482-4309-b008-068ccd6f9095 -a 10.0.0.2 -s 4420 -i 4
00:13:04.318 16:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:13:04.318 16:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:04.318 16:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:04.318 16:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:04.318 16:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:06.848 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:06.848 [ 0]:0x1
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7cbe1b07cc384b638d572d937d69bf99
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7cbe1b07cc384b638d572d937d69bf99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:06.849 [ 0]:0x1
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7cbe1b07cc384b638d572d937d69bf99
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7cbe1b07cc384b638d572d937d69bf99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:06.849 [ 1]:0x2
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:06.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:06.849 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:07.107 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:13:07.365 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:13:07.365 16:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b561a427-5482-4309-b008-068ccd6f9095 -a 10.0.0.2 -s 4420 -i 4
00:13:07.365 16:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:13:07.365 16:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:07.365 16:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:07.365 16:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:13:07.365 16:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:13:07.365 16:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:09.896 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:09.897 [ 0]:0x2
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:09.897 [ 0]:0x1
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7cbe1b07cc384b638d572d937d69bf99
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7cbe1b07cc384b638d572d937d69bf99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:09.897 [ 1]:0x2
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:09.897 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:10.158 [ 0]:0x2
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:10.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:10.158 16:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:10.419 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:13:10.419 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b561a427-5482-4309-b008-068ccd6f9095 -a 10.0.0.2 -s 4420 -i 4
00:13:10.677 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:13:10.677 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:10.677 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:10.677 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:13:10.677 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:13:10.677 16:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:12.580 [ 0]:0x1
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7cbe1b07cc384b638d572d937d69bf99
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7cbe1b07cc384b638d572d937d69bf99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:12.580 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:12.580 [ 1]:0x2
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:12.839 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:13.097 [ 0]:0x2
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:13.097 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:13.356 [2024-11-20 16:06:14.009929] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:13.356 request:
00:13:13.356 {
00:13:13.356 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:13.356 "nsid": 2,
00:13:13.356 "host": "nqn.2016-06.io.spdk:host1",
00:13:13.356 "method": "nvmf_ns_remove_host",
00:13:13.356 "req_id": 1
00:13:13.356 }
00:13:13.356 Got JSON-RPC error response
00:13:13.356 response:
00:13:13.356 {
00:13:13.356 "code": -32602,
00:13:13.356 "message": "Invalid parameters"
00:13:13.356 }
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:13.356 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:13.357 [ 0]:0x2
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:13.357 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48331f65452d421fb5757f89c0f677a8
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48331f65452d421fb5757f89c0f677a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:13.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2691547
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2691547 /var/tmp/host.sock
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2691547 ']'
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:13:13.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:13.615 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:13.615 [2024-11-20 16:06:14.360501] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:13:13.615 [2024-11-20 16:06:14.360549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691547 ] 00:13:13.615 [2024-11-20 16:06:14.439937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.874 [2024-11-20 16:06:14.481282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.874 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.874 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:13.874 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.132 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.390 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d92df951-b4bc-4bb5-b03c-65ddf155e759 00:13:14.390 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:14.390 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D92DF951B4BC4BB5B03C65DDF155E759 -i 00:13:14.649 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 146674a3-a5ae-4dc4-ba98-eab5cd716345 00:13:14.649 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:14.649 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 146674A3A5AE4DC4BA98EAB5CD716345 -i 00:13:14.907 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:14.907 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:15.166 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:15.166 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:15.734 nvme0n1 00:13:15.734 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:15.734 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:15.993 nvme1n2 00:13:15.993 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:15.993 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:15.993 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:15.993 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:15.993 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:15.993 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:16.252 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:16.252 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:16.252 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:16.252 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d92df951-b4bc-4bb5-b03c-65ddf155e759 == \d\9\2\d\f\9\5\1\-\b\4\b\c\-\4\b\b\5\-\b\0\3\c\-\6\5\d\d\f\1\5\5\e\7\5\9 ]] 00:13:16.252 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:16.252 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:16.252 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:16.510 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 146674a3-a5ae-4dc4-ba98-eab5cd716345 == \1\4\6\6\7\4\a\3\-\a\5\a\e\-\4\d\c\4\-\b\a\9\8\-\e\a\b\5\c\d\7\1\6\3\4\5 ]] 00:13:16.510 16:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.768 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d92df951-b4bc-4bb5-b03c-65ddf155e759 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D92DF951B4BC4BB5B03C65DDF155E759 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D92DF951B4BC4BB5B03C65DDF155E759 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D92DF951B4BC4BB5B03C65DDF155E759 00:13:17.026 [2024-11-20 16:06:17.804451] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:17.026 [2024-11-20 16:06:17.804490] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:17.026 [2024-11-20 16:06:17.804499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.026 request: 00:13:17.026 { 00:13:17.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.026 "namespace": { 00:13:17.026 "bdev_name": "invalid", 00:13:17.026 "nsid": 1, 00:13:17.026 "nguid": "D92DF951B4BC4BB5B03C65DDF155E759", 00:13:17.026 "no_auto_visible": false 00:13:17.026 }, 00:13:17.026 "method": "nvmf_subsystem_add_ns", 00:13:17.026 "req_id": 1 00:13:17.026 } 00:13:17.026 Got JSON-RPC error response 00:13:17.026 response: 00:13:17.026 { 00:13:17.026 "code": -32602, 00:13:17.026 "message": "Invalid parameters" 00:13:17.026 } 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d92df951-b4bc-4bb5-b03c-65ddf155e759 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:17.026 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D92DF951B4BC4BB5B03C65DDF155E759 -i 00:13:17.284 16:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2691547 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2691547 ']' 00:13:19.816 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2691547 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2691547 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2691547' 00:13:19.817 killing process with pid 2691547 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2691547 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2691547 00:13:19.817 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.076 rmmod nvme_tcp 00:13:20.076 rmmod 
nvme_fabrics 00:13:20.076 rmmod nvme_keyring 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2689142 ']' 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2689142 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2689142 ']' 00:13:20.076 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2689142 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689142 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689142' 00:13:20.334 killing process with pid 2689142 00:13:20.334 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2689142 00:13:20.335 16:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2689142 00:13:20.335 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.335 
16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.335 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.335 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:20.335 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:20.335 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.335 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.593 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.593 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.593 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.593 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.593 16:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.500 00:13:22.500 real 0m26.733s 00:13:22.500 user 0m32.124s 00:13:22.500 sys 0m7.173s 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:22.500 ************************************ 00:13:22.500 END TEST nvmf_ns_masking 00:13:22.500 ************************************ 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.500 ************************************ 00:13:22.500 START TEST nvmf_nvme_cli 00:13:22.500 ************************************ 00:13:22.500 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:22.760 * Looking for test storage... 00:13:22.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.760 16:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.760 --rc genhtml_branch_coverage=1 00:13:22.760 --rc genhtml_function_coverage=1 00:13:22.760 --rc genhtml_legend=1 00:13:22.760 --rc geninfo_all_blocks=1 00:13:22.760 --rc geninfo_unexecuted_blocks=1 00:13:22.760 
00:13:22.760 ' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.760 --rc genhtml_branch_coverage=1 00:13:22.760 --rc genhtml_function_coverage=1 00:13:22.760 --rc genhtml_legend=1 00:13:22.760 --rc geninfo_all_blocks=1 00:13:22.760 --rc geninfo_unexecuted_blocks=1 00:13:22.760 00:13:22.760 ' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.760 --rc genhtml_branch_coverage=1 00:13:22.760 --rc genhtml_function_coverage=1 00:13:22.760 --rc genhtml_legend=1 00:13:22.760 --rc geninfo_all_blocks=1 00:13:22.760 --rc geninfo_unexecuted_blocks=1 00:13:22.760 00:13:22.760 ' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.760 --rc genhtml_branch_coverage=1 00:13:22.760 --rc genhtml_function_coverage=1 00:13:22.760 --rc genhtml_legend=1 00:13:22.760 --rc geninfo_all_blocks=1 00:13:22.760 --rc geninfo_unexecuted_blocks=1 00:13:22.760 00:13:22.760 ' 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.760 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.761 16:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.761 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:29.331 16:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:29.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:29.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.331 16:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:29.331 Found net devices under 0000:86:00.0: cvl_0_0 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:29.331 Found net devices under 0000:86:00.1: cvl_0_1 00:13:29.331 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.332 16:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:13:29.332 00:13:29.332 --- 10.0.0.2 ping statistics --- 00:13:29.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.332 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:13:29.332 00:13:29.332 --- 10.0.0.1 ping statistics --- 00:13:29.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.332 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.332 16:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2696251 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2696251 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2696251 ']' 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.332 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.332 [2024-11-20 16:06:29.544901] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
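The `nvmf_tcp_init` trace above builds a network-namespace topology so that target and initiator can talk over two real NIC ports on the same host. A dry-run sketch of that setup follows (commands are printed, not executed, since they require root; interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the `10.0.0.x` addresses are taken from the log, while the helper function itself is illustrative):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology from the nvmf_tcp_init trace.
# One port (the target side) is moved into its own namespace; the other
# stays in the root namespace and acts as the initiator.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, isolated in the namespace
INI_IF=cvl_0_1   # initiator-side port, left in the root namespace

print_netns_setup() {
    echo "ip netns add $NS"
    echo "ip link set $TGT_IF netns $NS"
    echo "ip addr add 10.0.0.1/24 dev $INI_IF"
    echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
    echo "ip link set $INI_IF up"
    echo "ip netns exec $NS ip link set $TGT_IF up"
    # Open the NVMe/TCP listener port (4420) toward the initiator side.
    echo "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
}

print_netns_setup
```

After this, the log's two `ping -c 1` checks (root namespace → 10.0.0.2, namespace → 10.0.0.1) confirm the path in both directions before `nvmf_tgt` is launched with `ip netns exec`.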
00:13:29.332 [2024-11-20 16:06:29.544944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.332 [2024-11-20 16:06:29.624321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.332 [2024-11-20 16:06:29.667758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.332 [2024-11-20 16:06:29.667797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.332 [2024-11-20 16:06:29.667804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.332 [2024-11-20 16:06:29.667810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.332 [2024-11-20 16:06:29.667815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
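Once the target is up, the test exercises nvme-cli end to end: discover, connect, wait for the two Malloc-backed namespaces to appear by serial, then disconnect. A condensed dry-run of that sequence is sketched below (printed rather than executed: it needs nvme-cli, a live target, and root; the NQN, address, and serial come from the log, and the `--hostnqn`/`--hostid` arguments used in the actual trace are omitted here for brevity):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvme-cli flow driven by target/nvme_cli.sh.
SUBNQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2
PORT=4420
SERIAL=SPDKISFASTANDAWESOME

print_cli_flow() {
    # Enumerate discovery log entries (the log shows 2: discovery + cnode1).
    echo "nvme discover -t tcp -a $ADDR -s $PORT"
    # Attach the subsystem; both namespaces surface as /dev/nvme0n1, n2.
    echo "nvme connect -t tcp -n $SUBNQN -a $ADDR -s $PORT"
    # Poll until both block devices with the subsystem serial are visible.
    echo "lsblk -l -o NAME,SERIAL | grep -c $SERIAL"
    # Tear down the connection.
    echo "nvme disconnect -n $SUBNQN"
}

print_cli_flow
```

The trace's `waitforserial` helper does the `lsblk` polling step in a retry loop, returning once the device count matches the expected 2 namespaces.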
00:13:29.332 [2024-11-20 16:06:29.669294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.332 [2024-11-20 16:06:29.669401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.332 [2024-11-20 16:06:29.669507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.332 [2024-11-20 16:06:29.669508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.591 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.591 [2024-11-20 16:06:30.425295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 Malloc0 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 Malloc1 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 [2024-11-20 16:06:30.523845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:29.850 00:13:29.850 Discovery Log Number of Records 2, Generation counter 2 00:13:29.850 =====Discovery Log Entry 0====== 00:13:29.850 trtype: tcp 00:13:29.850 adrfam: ipv4 00:13:29.850 subtype: current discovery subsystem 00:13:29.850 treq: not required 00:13:29.850 portid: 0 00:13:29.850 trsvcid: 4420 
00:13:29.850 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:29.850 traddr: 10.0.0.2 00:13:29.850 eflags: explicit discovery connections, duplicate discovery information 00:13:29.850 sectype: none 00:13:29.850 =====Discovery Log Entry 1====== 00:13:29.850 trtype: tcp 00:13:29.850 adrfam: ipv4 00:13:29.850 subtype: nvme subsystem 00:13:29.850 treq: not required 00:13:29.850 portid: 0 00:13:29.850 trsvcid: 4420 00:13:29.850 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:29.850 traddr: 10.0.0.2 00:13:29.850 eflags: none 00:13:29.850 sectype: none 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:29.850 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.227 16:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:31.227 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.227 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.227 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:31.227 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:31.227 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:33.125 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:33.125 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:33.125 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.125 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:33.126 
16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:33.126 /dev/nvme0n2 ]] 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.126 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:33.384 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.643 rmmod nvme_tcp 00:13:33.643 rmmod nvme_fabrics 00:13:33.643 rmmod nvme_keyring 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2696251 ']' 
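The `waitforserial_disconnect SPDKISFASTANDAWESOME` step traced above polls `lsblk -o NAME,SERIAL` until the serial of the disconnected namespace disappears. A minimal sketch of that helper, assuming the polling interval and retry budget (the real implementation lives in `common/autotest_common.sh` and may differ in both):

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial_disconnect helper exercised in the log:
# poll lsblk until the given NVMe serial no longer appears among block
# devices. The 15-retry budget is illustrative, not the SPDK value.
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
        if ((++i > 15)); then
            return 1   # still attached after ~15s, give up
        fi
        sleep 1
    done
    return 0           # serial gone, disconnect completed
}
```

In the trace, the first `grep -q -w` probe already fails (the device vanished immediately after `nvme disconnect`), so the helper returns 0 on its first check.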
00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2696251 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2696251 ']' 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2696251 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2696251 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2696251' 00:13:33.643 killing process with pid 2696251 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2696251 00:13:33.643 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2696251 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.902 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.434 00:13:36.434 real 0m13.444s 00:13:36.434 user 0m21.836s 00:13:36.434 sys 0m5.109s 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 ************************************ 00:13:36.434 END TEST nvmf_nvme_cli 00:13:36.434 ************************************ 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 ************************************ 00:13:36.434 
START TEST nvmf_vfio_user 00:13:36.434 ************************************ 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:36.434 * Looking for test storage... 00:13:36.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.434 16:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:36.434 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:36.434 16:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.434 --rc genhtml_branch_coverage=1 00:13:36.434 --rc genhtml_function_coverage=1 00:13:36.434 --rc genhtml_legend=1 00:13:36.434 --rc geninfo_all_blocks=1 00:13:36.434 --rc geninfo_unexecuted_blocks=1 00:13:36.434 00:13:36.434 ' 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.434 --rc genhtml_branch_coverage=1 00:13:36.434 --rc genhtml_function_coverage=1 00:13:36.434 --rc genhtml_legend=1 00:13:36.434 --rc geninfo_all_blocks=1 00:13:36.434 --rc geninfo_unexecuted_blocks=1 00:13:36.434 00:13:36.434 ' 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.434 --rc genhtml_branch_coverage=1 00:13:36.434 --rc genhtml_function_coverage=1 00:13:36.434 --rc genhtml_legend=1 00:13:36.434 --rc geninfo_all_blocks=1 00:13:36.434 --rc geninfo_unexecuted_blocks=1 00:13:36.434 00:13:36.434 ' 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.434 --rc genhtml_branch_coverage=1 00:13:36.434 --rc genhtml_function_coverage=1 00:13:36.434 --rc genhtml_legend=1 00:13:36.434 --rc geninfo_all_blocks=1 00:13:36.434 --rc geninfo_unexecuted_blocks=1 00:13:36.434 00:13:36.434 ' 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.434 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.435 
16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:36.435 16:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2697643 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2697643' 00:13:36.435 Process pid: 2697643 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2697643 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2697643 ']' 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.435 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:36.435 [2024-11-20 16:06:37.095217] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:13:36.435 [2024-11-20 16:06:37.095271] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.435 [2024-11-20 16:06:37.168672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.435 [2024-11-20 16:06:37.211516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.435 [2024-11-20 16:06:37.211556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.435 [2024-11-20 16:06:37.211563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.435 [2024-11-20 16:06:37.211569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.435 [2024-11-20 16:06:37.211574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
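The `waitforlisten 2697643` call above blocks until the freshly started `nvmf_tgt` binds its RPC socket. A hedged sketch of that pattern, using the defaults visible in the log (`/var/tmp/spdk.sock`, `max_retries=100`); the poll interval is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from the log: wait for the target
# process (pid from nvmf_tgt startup) to create its UNIX-domain RPC
# socket, bailing out early if the process dies.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    [ -z "$pid" ] && return 1                   # no pid, nothing to wait for
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && return 0          # RPC socket is up
        kill -0 "$pid" 2>/dev/null || return 1  # process died early
        sleep 0.1
    done
    return 1                                    # timed out
}
```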
00:13:36.435 [2024-11-20 16:06:37.213108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.435 [2024-11-20 16:06:37.213223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.435 [2024-11-20 16:06:37.213329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.435 [2024-11-20 16:06:37.213330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.694 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.694 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:36.694 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:37.628 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:37.886 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:37.886 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:37.886 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:37.886 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:37.886 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:38.145 Malloc1 00:13:38.145 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:38.145 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:38.403 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:38.660 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:38.660 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:38.660 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:38.918 Malloc2 00:13:38.918 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:38.918 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:39.177 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:39.437 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:39.437 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:39.437 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:39.437 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:39.437 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:39.437 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:39.437 [2024-11-20 16:06:40.202841] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:13:39.437 [2024-11-20 16:06:40.202875] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698247 ] 00:13:39.437 [2024-11-20 16:06:40.242856] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:39.437 [2024-11-20 16:06:40.251302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:39.437 [2024-11-20 16:06:40.251324] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbf74e97000 00:13:39.437 [2024-11-20 16:06:40.252300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.253303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.254307] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.255312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.256322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.257326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.258332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.259342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.437 [2024-11-20 16:06:40.260348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:39.437 [2024-11-20 16:06:40.260357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbf74e8c000 00:13:39.437 [2024-11-20 16:06:40.261298] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:39.437 [2024-11-20 16:06:40.270902] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:39.437 [2024-11-20 16:06:40.270927] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:39.697 [2024-11-20 16:06:40.275432] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:39.697 [2024-11-20 16:06:40.275475] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:39.697 [2024-11-20 16:06:40.275545] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:39.697 [2024-11-20 16:06:40.275560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:39.697 [2024-11-20 16:06:40.275565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:39.697 [2024-11-20 16:06:40.276432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:39.697 [2024-11-20 16:06:40.276440] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:39.697 [2024-11-20 16:06:40.276447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:39.697 [2024-11-20 16:06:40.277437] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:39.697 [2024-11-20 16:06:40.277445] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:39.697 [2024-11-20 16:06:40.277454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:39.697 [2024-11-20 16:06:40.278448] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:39.697 [2024-11-20 16:06:40.278456] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:13:39.697 [2024-11-20 16:06:40.279453] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:13:39.697 [2024-11-20 16:06:40.279461] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0
00:13:39.697 [2024-11-20 16:06:40.279466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms)
00:13:39.698 [2024-11-20 16:06:40.279472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:13:39.698 [2024-11-20 16:06:40.279579] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1
00:13:39.698 [2024-11-20 16:06:40.279583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:13:39.698 [2024-11-20 16:06:40.279588] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:13:39.698 [2024-11-20 16:06:40.280463] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:13:39.698 [2024-11-20 16:06:40.281465] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:13:39.698 [2024-11-20 16:06:40.282469] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:39.698 [2024-11-20 16:06:40.283469] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:39.698 [2024-11-20 16:06:40.283534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:13:39.698 [2024-11-20 16:06:40.284477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:13:39.698 [2024-11-20 16:06:40.284484] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:13:39.698 [2024-11-20 16:06:40.284488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout)
00:13:39.698 [2024-11-20 16:06:40.284513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284526] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:39.698 [2024-11-20 16:06:40.284531] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:39.698 [2024-11-20 16:06:40.284534] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.698 [2024-11-20 16:06:40.284547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:13:39.698 [2024-11-20 16:06:40.284606] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072
00:13:39.698 [2024-11-20 16:06:40.284610] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072
00:13:39.698 [2024-11-20 16:06:40.284614] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001
00:13:39.698 [2024-11-20 16:06:40.284618] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:13:39.698 [2024-11-20 16:06:40.284625] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1
00:13:39.698 [2024-11-20 16:06:40.284629] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1
00:13:39.698 [2024-11-20 16:06:40.284633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:13:39.698 [2024-11-20 16:06:40.284670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:13:39.698 [2024-11-20
16:06:40.284678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:13:39.698 [2024-11-20 16:06:40.284685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:13:39.698 [2024-11-20 16:06:40.284692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:13:39.698 [2024-11-20 16:06:40.284697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:13:39.698 [2024-11-20 16:06:40.284728] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms
00:13:39.698 [2024-11-20 16:06:40.284732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait
for set number of queues (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:13:39.698 [2024-11-20 16:06:40.284811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284824] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:13:39.698 [2024-11-20 16:06:40.284828] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:13:39.698 [2024-11-20 16:06:40.284832] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.698 [2024-11-20 16:06:40.284837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:13:39.698 [2024-11-20 16:06:40.284860] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added
00:13:39.698 [2024-11-20 16:06:40.284871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:39.698 [2024-11-20 16:06:40.284888] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:39.698 [2024-11-20 16:06:40.284891] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.698 [2024-11-20 16:06:40.284896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:13:39.698 [2024-11-20 16:06:40.284929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:13:39.698 [2024-11-20 16:06:40.284942] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:39.698 [2024-11-20 16:06:40.284950] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:39.698 [2024-11-20 16:06:40.284954] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.698 [2024-11-20 16:06:40.284960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:39.698 [2024-11-20 16:06:40.284972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.284980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.284985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.284992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.284998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.285004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.285009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.285013] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID
00:13:39.699 [2024-11-20 16:06:40.285017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms)
00:13:39.699 [2024-11-20 16:06:40.285022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout)
00:13:39.699 [2024-11-20 16:06:40.285037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285048]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:13:39.699 [2024-11-20 16:06:40.285122] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:13:39.699 [2024-11-20 16:06:40.285125] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:13:39.699 [2024-11-20 16:06:40.285129] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:13:39.699 [2024-11-20 16:06:40.285132] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:13:39.699 [2024-11-20 16:06:40.285138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2
0x2000002f7000
00:13:39.699 [2024-11-20 16:06:40.285144] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:13:39.699 [2024-11-20 16:06:40.285148] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:13:39.699 [2024-11-20 16:06:40.285151] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.699 [2024-11-20 16:06:40.285156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285162] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:13:39.699 [2024-11-20 16:06:40.285166] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:39.699 [2024-11-20 16:06:40.285169] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.699 [2024-11-20 16:06:40.285174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285181] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:13:39.699 [2024-11-20 16:06:40.285186] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:13:39.699 [2024-11-20 16:06:40.285190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:39.699 [2024-11-20 16:06:40.285195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:13:39.699 [2024-11-20 16:06:40.285201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0
sqhd:0010 p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:13:39.699 [2024-11-20 16:06:40.285230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:13:39.699 =====================================================
00:13:39.699 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:39.699 =====================================================
00:13:39.699 Controller Capabilities/Features
00:13:39.699 ================================
00:13:39.699 Vendor ID: 4e58
00:13:39.699 Subsystem Vendor ID: 4e58
00:13:39.699 Serial Number: SPDK1
00:13:39.699 Model Number: SPDK bdev Controller
00:13:39.699 Firmware Version: 25.01
00:13:39.699 Recommended Arb Burst: 6
00:13:39.699 IEEE OUI Identifier: 8d 6b 50
00:13:39.699 Multi-path I/O
00:13:39.699 May have multiple subsystem ports: Yes
00:13:39.699 May have multiple controllers: Yes
00:13:39.699 Associated with SR-IOV VF: No
00:13:39.699 Max Data Transfer Size: 131072
00:13:39.699 Max Number of Namespaces: 32
00:13:39.699 Max Number of I/O Queues: 127
00:13:39.699 NVMe Specification Version (VS): 1.3
00:13:39.699 NVMe Specification Version (Identify): 1.3
00:13:39.699 Maximum Queue Entries: 256
00:13:39.699 Contiguous Queues Required: Yes
00:13:39.699 Arbitration Mechanisms Supported
00:13:39.699 Weighted Round Robin: Not Supported
00:13:39.699 Vendor Specific: Not Supported
00:13:39.699 Reset Timeout: 15000 ms
00:13:39.699 Doorbell Stride: 4 bytes
00:13:39.699 NVM Subsystem Reset: Not Supported
00:13:39.699 Command Sets Supported
00:13:39.699 NVM Command Set: Supported
00:13:39.699 Boot Partition: Not Supported
00:13:39.699 Memory
Page Size Minimum: 4096 bytes
00:13:39.699 Memory Page Size Maximum: 4096 bytes
00:13:39.699 Persistent Memory Region: Not Supported
00:13:39.699 Optional Asynchronous Events Supported
00:13:39.699 Namespace Attribute Notices: Supported
00:13:39.699 Firmware Activation Notices: Not Supported
00:13:39.699 ANA Change Notices: Not Supported
00:13:39.699 PLE Aggregate Log Change Notices: Not Supported
00:13:39.699 LBA Status Info Alert Notices: Not Supported
00:13:39.699 EGE Aggregate Log Change Notices: Not Supported
00:13:39.699 Normal NVM Subsystem Shutdown event: Not Supported
00:13:39.699 Zone Descriptor Change Notices: Not Supported
00:13:39.699 Discovery Log Change Notices: Not Supported
00:13:39.699 Controller Attributes
00:13:39.699 128-bit Host Identifier: Supported
00:13:39.699 Non-Operational Permissive Mode: Not Supported
00:13:39.699 NVM Sets: Not Supported
00:13:39.699 Read Recovery Levels: Not Supported
00:13:39.699 Endurance Groups: Not Supported
00:13:39.699 Predictable Latency Mode: Not Supported
00:13:39.699 Traffic Based Keep ALive: Not Supported
00:13:39.699 Namespace Granularity: Not Supported
00:13:39.699 SQ Associations: Not Supported
00:13:39.699 UUID List: Not Supported
00:13:39.699 Multi-Domain Subsystem: Not Supported
00:13:39.699 Fixed Capacity Management: Not Supported
00:13:39.699 Variable Capacity Management: Not Supported
00:13:39.699 Delete Endurance Group: Not Supported
00:13:39.699 Delete NVM Set: Not Supported
00:13:39.699 Extended LBA Formats Supported: Not Supported
00:13:39.699 Flexible Data Placement Supported: Not Supported
00:13:39.699
00:13:39.699 Controller Memory Buffer Support
00:13:39.700 ================================
00:13:39.700 Supported: No
00:13:39.700
00:13:39.700 Persistent Memory Region Support
00:13:39.700 ================================
00:13:39.700 Supported: No
00:13:39.700
00:13:39.700 Admin Command Set Attributes
00:13:39.700 ============================
00:13:39.700 Security Send/Receive: Not Supported
00:13:39.700 Format NVM: Not Supported
00:13:39.700 Firmware Activate/Download: Not Supported
00:13:39.700 Namespace Management: Not Supported
00:13:39.700 Device Self-Test: Not Supported
00:13:39.700 Directives: Not Supported
00:13:39.700 NVMe-MI: Not Supported
00:13:39.700 Virtualization Management: Not Supported
00:13:39.700 Doorbell Buffer Config: Not Supported
00:13:39.700 Get LBA Status Capability: Not Supported
00:13:39.700 Command & Feature Lockdown Capability: Not Supported
00:13:39.700 Abort Command Limit: 4
00:13:39.700 Async Event Request Limit: 4
00:13:39.700 Number of Firmware Slots: N/A
00:13:39.700 Firmware Slot 1 Read-Only: N/A
00:13:39.700 Firmware Activation Without Reset: N/A
00:13:39.700 Multiple Update Detection Support: N/A
00:13:39.700 Firmware Update Granularity: No Information Provided
00:13:39.700 Per-Namespace SMART Log: No
00:13:39.700 Asymmetric Namespace Access Log Page: Not Supported
00:13:39.700 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:13:39.700 Command Effects Log Page: Supported
00:13:39.700 Get Log Page Extended Data: Supported
00:13:39.700 Telemetry Log Pages: Not Supported
00:13:39.700 Persistent Event Log Pages: Not Supported
00:13:39.700 Supported Log Pages Log Page: May Support
00:13:39.700 Commands Supported & Effects Log Page: Not Supported
00:13:39.700 Feature Identifiers & Effects Log Page:May Support
00:13:39.700 NVMe-MI Commands & Effects Log Page: May Support
00:13:39.700 Data Area 4 for Telemetry Log: Not Supported
00:13:39.700 Error Log Page Entries Supported: 128
00:13:39.700 Keep Alive: Supported
00:13:39.700 Keep Alive Granularity: 10000 ms
00:13:39.700
00:13:39.700 NVM Command Set Attributes
00:13:39.700 ==========================
00:13:39.700 Submission Queue Entry Size
00:13:39.700 Max: 64
00:13:39.700 Min: 64
00:13:39.700 Completion Queue Entry Size
00:13:39.700 Max: 16
00:13:39.700 Min: 16
00:13:39.700 Number of Namespaces: 32
00:13:39.700 Compare Command: Supported
00:13:39.700 Write Uncorrectable
Command: Not Supported
00:13:39.700 Dataset Management Command: Supported
00:13:39.700 Write Zeroes Command: Supported
00:13:39.700 Set Features Save Field: Not Supported
00:13:39.700 Reservations: Not Supported
00:13:39.700 Timestamp: Not Supported
00:13:39.700 Copy: Supported
00:13:39.700 Volatile Write Cache: Present
00:13:39.700 Atomic Write Unit (Normal): 1
00:13:39.700 Atomic Write Unit (PFail): 1
00:13:39.700 Atomic Compare & Write Unit: 1
00:13:39.700 Fused Compare & Write: Supported
00:13:39.700 Scatter-Gather List
00:13:39.700 SGL Command Set: Supported (Dword aligned)
00:13:39.700 SGL Keyed: Not Supported
00:13:39.700 SGL Bit Bucket Descriptor: Not Supported
00:13:39.700 SGL Metadata Pointer: Not Supported
00:13:39.700 Oversized SGL: Not Supported
00:13:39.700 SGL Metadata Address: Not Supported
00:13:39.700 SGL Offset: Not Supported
00:13:39.700 Transport SGL Data Block: Not Supported
00:13:39.700 Replay Protected Memory Block: Not Supported
00:13:39.700
00:13:39.700 Firmware Slot Information
00:13:39.700 =========================
00:13:39.700 Active slot: 1
00:13:39.700 Slot 1 Firmware Revision: 25.01
00:13:39.700
00:13:39.700
00:13:39.700 Commands Supported and Effects
00:13:39.700 ==============================
00:13:39.700 Admin Commands
00:13:39.700 --------------
00:13:39.700 Get Log Page (02h): Supported
00:13:39.700 Identify (06h): Supported
00:13:39.700 Abort (08h): Supported
00:13:39.700 Set Features (09h): Supported
00:13:39.700 Get Features (0Ah): Supported
00:13:39.700 Asynchronous Event Request (0Ch): Supported
00:13:39.700 Keep Alive (18h): Supported
00:13:39.700 I/O Commands
00:13:39.700 ------------
00:13:39.700 Flush (00h): Supported LBA-Change
00:13:39.700 Write (01h): Supported LBA-Change
00:13:39.700 Read (02h): Supported
00:13:39.700 Compare (05h): Supported
00:13:39.700 Write Zeroes (08h): Supported LBA-Change
00:13:39.700 Dataset Management (09h): Supported LBA-Change
00:13:39.700 Copy (19h): Supported LBA-Change
00:13:39.700
00:13:39.700 Error Log
00:13:39.700 =========
00:13:39.700
00:13:39.700 Arbitration
00:13:39.700 ===========
00:13:39.700 Arbitration Burst: 1
00:13:39.700
00:13:39.700 Power Management
00:13:39.700 ================
00:13:39.700 Number of Power States: 1
00:13:39.700 Current Power State: Power State #0
00:13:39.700 Power State #0:
00:13:39.700 Max Power: 0.00 W
00:13:39.700 Non-Operational State: Operational
00:13:39.700 Entry Latency: Not Reported
00:13:39.700 Exit Latency: Not Reported
00:13:39.700 Relative Read Throughput: 0
00:13:39.700 Relative Read Latency: 0
00:13:39.700 Relative Write Throughput: 0
00:13:39.700 Relative Write Latency: 0
00:13:39.700 Idle Power: Not Reported
00:13:39.700 Active Power: Not Reported
00:13:39.700 Non-Operational Permissive Mode: Not Supported
00:13:39.700
00:13:39.700 Health Information
00:13:39.700 ==================
00:13:39.700 Critical Warnings:
00:13:39.700 Available Spare Space: OK
00:13:39.700 Temperature: OK
00:13:39.700 Device Reliability: OK
00:13:39.700 Read Only: No
00:13:39.700 Volatile Memory Backup: OK
00:13:39.700 Current Temperature: 0 Kelvin (-273 Celsius)
00:13:39.700 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:13:39.700 Available Spare: 0%
00:13:39.700 Available Sp[2024-11-20 16:06:40.285317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:13:39.700 [2024-11-20 16:06:40.285326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:13:39.700 [2024-11-20 16:06:40.285350] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:13:39.700 [2024-11-20 16:06:40.285359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:39.700 [2024-11-20 16:06:40.285365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:39.700 [2024-11-20 16:06:40.285370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:39.700 [2024-11-20 16:06:40.285376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:39.700 [2024-11-20 16:06:40.288955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:39.700 [2024-11-20 16:06:40.288967] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:13:39.700 [2024-11-20 16:06:40.289522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:39.700 [2024-11-20 16:06:40.289575] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:13:39.700 [2024-11-20 16:06:40.289581] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:13:39.700 [2024-11-20 16:06:40.290524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:13:39.700 [2024-11-20 16:06:40.290534] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:13:39.700 [2024-11-20 16:06:40.290582] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:13:39.700 [2024-11-20 16:06:40.292557] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:39.700 are Threshold: 0%
00:13:39.700 Life Percentage Used: 0%
00:13:39.700 Data Units Read: 0
00:13:39.701 Data Units Written: 0
00:13:39.701 Host Read Commands: 0
00:13:39.701 Host Write Commands: 0
00:13:39.701 Controller Busy Time: 0 minutes
00:13:39.701 Power Cycles: 0
00:13:39.701 Power On Hours: 0 hours
00:13:39.701 Unsafe Shutdowns: 0
00:13:39.701 Unrecoverable Media Errors: 0
00:13:39.701 Lifetime Error Log Entries: 0
00:13:39.701 Warning Temperature Time: 0 minutes
00:13:39.701 Critical Temperature Time: 0 minutes
00:13:39.701
00:13:39.701 Number of Queues
00:13:39.701 ================
00:13:39.701 Number of I/O Submission Queues: 127
00:13:39.701 Number of I/O Completion Queues: 127
00:13:39.701
00:13:39.701 Active Namespaces
00:13:39.701 =================
00:13:39.701 Namespace ID:1
00:13:39.701 Error Recovery Timeout: Unlimited
00:13:39.701 Command Set Identifier: NVM (00h)
00:13:39.701 Deallocate: Supported
00:13:39.701 Deallocated/Unwritten Error: Not Supported
00:13:39.701 Deallocated Read Value: Unknown
00:13:39.701 Deallocate in Write Zeroes: Not Supported
00:13:39.701 Deallocated Guard Field: 0xFFFF
00:13:39.701 Flush: Supported
00:13:39.701 Reservation: Supported
00:13:39.701 Namespace Sharing Capabilities: Multiple Controllers
00:13:39.701 Size (in LBAs): 131072 (0GiB)
00:13:39.701 Capacity (in LBAs): 131072 (0GiB)
00:13:39.701 Utilization (in LBAs): 131072 (0GiB)
00:13:39.701 NGUID: 04706D8CD2554872839431E3017A19CB
00:13:39.701 UUID: 04706d8c-d255-4872-8394-31e3017a19cb
00:13:39.701 Thin Provisioning: Not Supported
00:13:39.701 Per-NS Atomic Units: Yes
00:13:39.701 Atomic Boundary Size (Normal): 0
00:13:39.701 Atomic Boundary Size (PFail): 0
00:13:39.701 Atomic Boundary Offset: 0
00:13:39.701 Maximum Single Source Range Length: 65535
00:13:39.701 Maximum Copy Length: 65535
00:13:39.701 Maximum Source Range Count: 1
00:13:39.701 NGUID/EUI64 Never Reused: No
00:13:39.701 Namespace Write Protected: No
00:13:39.701 Number of LBA Formats: 1
00:13:39.701 Current LBA Format: LBA Format #00
00:13:39.701 LBA
Format #00: Data Size: 512 Metadata Size: 0
00:13:39.701
00:13:39.701 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:13:39.701 [2024-11-20 16:06:40.530806] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:45.069 Initializing NVMe Controllers
00:13:45.069 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:45.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:13:45.069 Initialization complete. Launching workers.
00:13:45.069 ========================================================
00:13:45.069 Latency(us)
00:13:45.069 Device Information : IOPS MiB/s Average min max
00:13:45.069 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39978.36 156.17 3202.16 972.88 6665.05
00:13:45.069 ========================================================
00:13:45.069 Total : 39978.36 156.17 3202.16 972.88 6665.05
00:13:45.069
00:13:45.069 [2024-11-20 16:06:45.556077] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:45.069 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:13:45.069 [2024-11-20 16:06:45.792194] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:50.341 Initializing NVMe Controllers
00:13:50.341 Attached to NVMe over Fabrics controller at
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.341 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:50.341 Initialization complete. Launching workers. 00:13:50.341 ======================================================== 00:13:50.341 Latency(us) 00:13:50.341 Device Information : IOPS MiB/s Average min max 00:13:50.341 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16028.69 62.61 7991.02 6974.29 11973.74 00:13:50.341 ======================================================== 00:13:50.341 Total : 16028.69 62.61 7991.02 6974.29 11973.74 00:13:50.341 00:13:50.341 [2024-11-20 16:06:50.837438] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.341 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:50.341 [2024-11-20 16:06:51.051415] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:55.615 [2024-11-20 16:06:56.160403] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:55.615 Initializing NVMe Controllers 00:13:55.615 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:55.615 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:55.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:55.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:55.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:55.615 Initialization complete. 
Launching workers. 00:13:55.615 Starting thread on core 2 00:13:55.615 Starting thread on core 3 00:13:55.615 Starting thread on core 1 00:13:55.615 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:55.874 [2024-11-20 16:06:56.464405] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.165 [2024-11-20 16:06:59.528174] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.165 Initializing NVMe Controllers 00:13:59.165 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.165 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:59.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:59.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:59.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:59.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:59.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:59.165 Initialization complete. Launching workers. 
00:13:59.165 Starting thread on core 1 with urgent priority queue 00:13:59.165 Starting thread on core 2 with urgent priority queue 00:13:59.165 Starting thread on core 3 with urgent priority queue 00:13:59.165 Starting thread on core 0 with urgent priority queue 00:13:59.165 SPDK bdev Controller (SPDK1 ) core 0: 9043.67 IO/s 11.06 secs/100000 ios 00:13:59.165 SPDK bdev Controller (SPDK1 ) core 1: 9008.67 IO/s 11.10 secs/100000 ios 00:13:59.165 SPDK bdev Controller (SPDK1 ) core 2: 9077.33 IO/s 11.02 secs/100000 ios 00:13:59.165 SPDK bdev Controller (SPDK1 ) core 3: 7336.00 IO/s 13.63 secs/100000 ios 00:13:59.165 ======================================================== 00:13:59.165 00:13:59.165 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:59.165 [2024-11-20 16:06:59.820326] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.165 Initializing NVMe Controllers 00:13:59.165 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.165 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.165 Namespace ID: 1 size: 0GB 00:13:59.165 Initialization complete. 00:13:59.165 INFO: using host memory buffer for IO 00:13:59.165 Hello world! 
00:13:59.165 [2024-11-20 16:06:59.854615] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.165 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:59.423 [2024-11-20 16:07:00.133818] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.361 Initializing NVMe Controllers 00:14:00.361 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:00.361 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:00.361 Initialization complete. Launching workers. 00:14:00.361 submit (in ns) avg, min, max = 7162.1, 3265.2, 4996573.9 00:14:00.361 complete (in ns) avg, min, max = 20408.6, 1764.3, 4025146.1 00:14:00.361 00:14:00.361 Submit histogram 00:14:00.361 ================ 00:14:00.361 Range in us Cumulative Count 00:14:00.361 3.256 - 3.270: 0.0060% ( 1) 00:14:00.361 3.270 - 3.283: 0.0181% ( 2) 00:14:00.361 3.283 - 3.297: 0.0482% ( 5) 00:14:00.361 3.297 - 3.311: 0.1385% ( 15) 00:14:00.361 3.311 - 3.325: 0.4036% ( 44) 00:14:00.361 3.325 - 3.339: 1.6083% ( 200) 00:14:00.361 3.339 - 3.353: 5.4756% ( 642) 00:14:00.361 3.353 - 3.367: 10.9210% ( 904) 00:14:00.361 3.367 - 3.381: 17.3002% ( 1059) 00:14:00.361 3.381 - 3.395: 23.7937% ( 1078) 00:14:00.361 3.395 - 3.409: 30.0645% ( 1041) 00:14:00.361 3.409 - 3.423: 35.3473% ( 877) 00:14:00.361 3.423 - 3.437: 41.1481% ( 963) 00:14:00.361 3.437 - 3.450: 45.8105% ( 774) 00:14:00.361 3.450 - 3.464: 49.9849% ( 693) 00:14:00.361 3.464 - 3.478: 54.2076% ( 701) 00:14:00.361 3.478 - 3.492: 60.4000% ( 1028) 00:14:00.361 3.492 - 3.506: 67.4056% ( 1163) 00:14:00.361 3.506 - 3.520: 71.7668% ( 724) 00:14:00.361 3.520 - 3.534: 76.8387% ( 842) 00:14:00.361 3.534 - 3.548: 81.7119% ( 809) 
00:14:00.361 3.548 - 3.562: 84.8985% ( 529) 00:14:00.361 3.562 - 3.590: 87.4827% ( 429) 00:14:00.361 3.590 - 3.617: 88.1634% ( 113) 00:14:00.361 3.617 - 3.645: 89.0127% ( 141) 00:14:00.361 3.645 - 3.673: 90.5729% ( 259) 00:14:00.361 3.673 - 3.701: 92.3318% ( 292) 00:14:00.361 3.701 - 3.729: 94.0064% ( 278) 00:14:00.361 3.729 - 3.757: 95.7231% ( 285) 00:14:00.361 3.757 - 3.784: 97.2050% ( 246) 00:14:00.361 3.784 - 3.812: 98.3314% ( 187) 00:14:00.361 3.812 - 3.840: 98.9218% ( 98) 00:14:00.361 3.840 - 3.868: 99.2832% ( 60) 00:14:00.361 3.868 - 3.896: 99.4880% ( 34) 00:14:00.361 3.896 - 3.923: 99.5542% ( 11) 00:14:00.361 3.923 - 3.951: 99.5663% ( 2) 00:14:00.361 3.951 - 3.979: 99.5723% ( 1) 00:14:00.361 5.370 - 5.398: 99.5783% ( 1) 00:14:00.361 5.482 - 5.510: 99.5844% ( 1) 00:14:00.361 5.510 - 5.537: 99.5904% ( 1) 00:14:00.361 5.537 - 5.565: 99.6024% ( 2) 00:14:00.361 5.565 - 5.593: 99.6085% ( 1) 00:14:00.361 5.593 - 5.621: 99.6145% ( 1) 00:14:00.361 5.649 - 5.677: 99.6205% ( 1) 00:14:00.361 5.732 - 5.760: 99.6265% ( 1) 00:14:00.361 5.816 - 5.843: 99.6326% ( 1) 00:14:00.361 5.927 - 5.955: 99.6386% ( 1) 00:14:00.361 5.955 - 5.983: 99.6446% ( 1) 00:14:00.361 5.983 - 6.010: 99.6506% ( 1) 00:14:00.361 6.066 - 6.094: 99.6566% ( 1) 00:14:00.361 6.094 - 6.122: 99.6627% ( 1) 00:14:00.361 6.150 - 6.177: 99.6687% ( 1) 00:14:00.361 6.261 - 6.289: 99.6747% ( 1) 00:14:00.361 6.289 - 6.317: 99.6807% ( 1) 00:14:00.361 6.344 - 6.372: 99.6868% ( 1) 00:14:00.361 6.428 - 6.456: 99.6928% ( 1) 00:14:00.361 6.511 - 6.539: 99.6988% ( 1) 00:14:00.361 6.539 - 6.567: 99.7048% ( 1) 00:14:00.361 6.595 - 6.623: 99.7109% ( 1) 00:14:00.361 6.623 - 6.650: 99.7169% ( 1) 00:14:00.361 6.650 - 6.678: 99.7229% ( 1) 00:14:00.361 6.706 - 6.734: 99.7289% ( 1) 00:14:00.361 6.762 - 6.790: 99.7350% ( 1) 00:14:00.361 6.845 - 6.873: 99.7410% ( 1) 00:14:00.361 6.873 - 6.901: 99.7530% ( 2) 00:14:00.361 6.929 - 6.957: 99.7591% ( 1) 00:14:00.361 6.957 - 6.984: 99.7771% ( 3) 00:14:00.361 6.984 - 7.012: 99.7892% ( 2) 
00:14:00.361 7.012 - 7.040: 99.7952% ( 1) 00:14:00.361 7.123 - 7.179: 99.8012% ( 1) 00:14:00.361 7.179 - 7.235: 99.8072% ( 1) 00:14:00.361 7.346 - 7.402: 99.8193% ( 2) 00:14:00.361 7.457 - 7.513: 99.8374% ( 3) 00:14:00.361 7.569 - 7.624: 99.8434% ( 1) 00:14:00.361 7.680 - 7.736: 99.8615% ( 3) 00:14:00.361 7.736 - 7.791: 99.8675% ( 1) 00:14:00.361 7.791 - 7.847: 99.8735% ( 1) 00:14:00.361 8.070 - 8.125: 99.8795% ( 1) 00:14:00.361 8.125 - 8.181: 99.8976% ( 3) 00:14:00.361 [2024-11-20 16:07:01.153863] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:00.361 8.849 - 8.904: 99.9036% ( 1) 00:14:00.361 9.850 - 9.906: 99.9096% ( 1) 00:14:00.361 3989.148 - 4017.642: 99.9940% ( 14) 00:14:00.361 4986.435 - 5014.929: 100.0000% ( 1) 00:14:00.361 00:14:00.361 Complete histogram 00:14:00.361 ================== 00:14:00.361 Range in us Cumulative Count 00:14:00.361 1.760 - 1.767: 0.0060% ( 1) 00:14:00.361 1.767 - 1.774: 0.0422% ( 6) 00:14:00.361 1.774 - 1.781: 0.0843% ( 7) 00:14:00.361 1.781 - 1.795: 0.1024% ( 3) 00:14:00.361 1.795 - 1.809: 0.1205% ( 3) 00:14:00.361 1.809 - 1.823: 2.6083% ( 413) 00:14:00.361 1.823 - 1.837: 17.1375% ( 2412) 00:14:00.362 1.837 - 1.850: 20.8301% ( 613) 00:14:00.362 1.850 - 1.864: 25.1973% ( 725) 00:14:00.362 1.864 - 1.878: 66.5382% ( 6863) 00:14:00.362 1.878 - 1.892: 88.9163% ( 3715) 00:14:00.362 1.892 - 1.906: 93.7895% ( 809) 00:14:00.362 1.906 - 1.920: 95.6087% ( 302) 00:14:00.362 1.920 - 1.934: 96.1749% ( 94) 00:14:00.362 1.934 - 1.948: 97.6447% ( 244) 00:14:00.362 1.948 - 1.962: 98.9940% ( 224) 00:14:00.362 1.962 - 1.976: 99.3073% ( 52) 00:14:00.362 1.976 - 1.990: 99.3555% ( 8) 00:14:00.362 1.990 - 2.003: 99.3856% ( 5) 00:14:00.362 2.003 - 2.017: 99.3916% ( 1) 00:14:00.362 3.492 - 3.506: 99.3976% ( 1) 00:14:00.362 4.174 - 4.202: 99.4097% ( 2) 00:14:00.362 4.313 - 4.341: 99.4157% ( 1) 00:14:00.362 4.619 - 4.647: 99.4217% ( 1) 00:14:00.362 4.647 - 4.675: 99.4277% ( 1) 00:14:00.362 4.786 - 
4.814: 99.4338% ( 1) 00:14:00.362 4.814 - 4.842: 99.4398% ( 1) 00:14:00.362 4.925 - 4.953: 99.4458% ( 1) 00:14:00.362 5.037 - 5.064: 99.4518% ( 1) 00:14:00.362 5.064 - 5.092: 99.4579% ( 1) 00:14:00.362 5.092 - 5.120: 99.4639% ( 1) 00:14:00.362 5.259 - 5.287: 99.4759% ( 2) 00:14:00.362 5.315 - 5.343: 99.4820% ( 1) 00:14:00.362 5.343 - 5.370: 99.5000% ( 3) 00:14:00.362 5.510 - 5.537: 99.5061% ( 1) 00:14:00.362 5.760 - 5.788: 99.5121% ( 1) 00:14:00.362 5.843 - 5.871: 99.5181% ( 1) 00:14:00.362 6.706 - 6.734: 99.5241% ( 1) 00:14:00.362 9.795 - 9.850: 99.5301% ( 1) 00:14:00.362 39.402 - 39.624: 99.5362% ( 1) 00:14:00.362 3989.148 - 4017.642: 99.9940% ( 76) 00:14:00.362 4017.642 - 4046.136: 100.0000% ( 1) 00:14:00.362 00:14:00.621 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:00.621 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:00.621 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:00.621 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:00.621 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:00.621 [ 00:14:00.621 { 00:14:00.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:00.621 "subtype": "Discovery", 00:14:00.621 "listen_addresses": [], 00:14:00.621 "allow_any_host": true, 00:14:00.621 "hosts": [] 00:14:00.621 }, 00:14:00.621 { 00:14:00.621 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:00.621 "subtype": "NVMe", 00:14:00.621 "listen_addresses": [ 00:14:00.621 { 00:14:00.621 "trtype": "VFIOUSER", 00:14:00.621 "adrfam": "IPv4", 00:14:00.621 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:14:00.621 "trsvcid": "0" 00:14:00.621 } 00:14:00.621 ], 00:14:00.621 "allow_any_host": true, 00:14:00.621 "hosts": [], 00:14:00.621 "serial_number": "SPDK1", 00:14:00.621 "model_number": "SPDK bdev Controller", 00:14:00.621 "max_namespaces": 32, 00:14:00.621 "min_cntlid": 1, 00:14:00.621 "max_cntlid": 65519, 00:14:00.621 "namespaces": [ 00:14:00.621 { 00:14:00.621 "nsid": 1, 00:14:00.621 "bdev_name": "Malloc1", 00:14:00.621 "name": "Malloc1", 00:14:00.621 "nguid": "04706D8CD2554872839431E3017A19CB", 00:14:00.621 "uuid": "04706d8c-d255-4872-8394-31e3017a19cb" 00:14:00.621 } 00:14:00.621 ] 00:14:00.621 }, 00:14:00.621 { 00:14:00.622 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:00.622 "subtype": "NVMe", 00:14:00.622 "listen_addresses": [ 00:14:00.622 { 00:14:00.622 "trtype": "VFIOUSER", 00:14:00.622 "adrfam": "IPv4", 00:14:00.622 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:00.622 "trsvcid": "0" 00:14:00.622 } 00:14:00.622 ], 00:14:00.622 "allow_any_host": true, 00:14:00.622 "hosts": [], 00:14:00.622 "serial_number": "SPDK2", 00:14:00.622 "model_number": "SPDK bdev Controller", 00:14:00.622 "max_namespaces": 32, 00:14:00.622 "min_cntlid": 1, 00:14:00.622 "max_cntlid": 65519, 00:14:00.622 "namespaces": [ 00:14:00.622 { 00:14:00.622 "nsid": 1, 00:14:00.622 "bdev_name": "Malloc2", 00:14:00.622 "name": "Malloc2", 00:14:00.622 "nguid": "966903D33F6A47BFACDEFC7CB99A6624", 00:14:00.622 "uuid": "966903d3-3f6a-47bf-acde-fc7cb99a6624" 00:14:00.622 } 00:14:00.622 ] 00:14:00.622 } 00:14:00.622 ] 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 
00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2701719 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:00.622 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:00.882 [2024-11-20 16:07:01.548355] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.882 Malloc3 00:14:00.882 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:01.141 [2024-11-20 16:07:01.783182] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.142 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:01.142 Asynchronous Event Request test 00:14:01.142 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:01.142 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:01.142 Registering 
asynchronous event callbacks... 00:14:01.142 Starting namespace attribute notice tests for all controllers... 00:14:01.142 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:01.142 aer_cb - Changed Namespace 00:14:01.142 Cleaning up... 00:14:01.142 [ 00:14:01.142 { 00:14:01.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:01.142 "subtype": "Discovery", 00:14:01.142 "listen_addresses": [], 00:14:01.142 "allow_any_host": true, 00:14:01.142 "hosts": [] 00:14:01.142 }, 00:14:01.142 { 00:14:01.142 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:01.142 "subtype": "NVMe", 00:14:01.142 "listen_addresses": [ 00:14:01.142 { 00:14:01.142 "trtype": "VFIOUSER", 00:14:01.142 "adrfam": "IPv4", 00:14:01.142 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:01.142 "trsvcid": "0" 00:14:01.142 } 00:14:01.142 ], 00:14:01.142 "allow_any_host": true, 00:14:01.142 "hosts": [], 00:14:01.142 "serial_number": "SPDK1", 00:14:01.142 "model_number": "SPDK bdev Controller", 00:14:01.142 "max_namespaces": 32, 00:14:01.142 "min_cntlid": 1, 00:14:01.142 "max_cntlid": 65519, 00:14:01.142 "namespaces": [ 00:14:01.142 { 00:14:01.142 "nsid": 1, 00:14:01.142 "bdev_name": "Malloc1", 00:14:01.142 "name": "Malloc1", 00:14:01.142 "nguid": "04706D8CD2554872839431E3017A19CB", 00:14:01.142 "uuid": "04706d8c-d255-4872-8394-31e3017a19cb" 00:14:01.142 }, 00:14:01.142 { 00:14:01.142 "nsid": 2, 00:14:01.142 "bdev_name": "Malloc3", 00:14:01.142 "name": "Malloc3", 00:14:01.142 "nguid": "A2EE4558D47346D3B45505EA565FED91", 00:14:01.142 "uuid": "a2ee4558-d473-46d3-b455-05ea565fed91" 00:14:01.142 } 00:14:01.142 ] 00:14:01.142 }, 00:14:01.142 { 00:14:01.142 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:01.142 "subtype": "NVMe", 00:14:01.142 "listen_addresses": [ 00:14:01.142 { 00:14:01.142 "trtype": "VFIOUSER", 00:14:01.142 "adrfam": "IPv4", 00:14:01.142 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:01.142 "trsvcid": "0" 00:14:01.142 } 
00:14:01.142 ], 00:14:01.142 "allow_any_host": true, 00:14:01.142 "hosts": [], 00:14:01.142 "serial_number": "SPDK2", 00:14:01.142 "model_number": "SPDK bdev Controller", 00:14:01.142 "max_namespaces": 32, 00:14:01.142 "min_cntlid": 1, 00:14:01.142 "max_cntlid": 65519, 00:14:01.142 "namespaces": [ 00:14:01.142 { 00:14:01.142 "nsid": 1, 00:14:01.142 "bdev_name": "Malloc2", 00:14:01.142 "name": "Malloc2", 00:14:01.142 "nguid": "966903D33F6A47BFACDEFC7CB99A6624", 00:14:01.142 "uuid": "966903d3-3f6a-47bf-acde-fc7cb99a6624" 00:14:01.142 } 00:14:01.142 ] 00:14:01.142 } 00:14:01.142 ] 00:14:01.402 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2701719 00:14:01.402 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.402 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:01.402 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:01.402 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:01.402 [2024-11-20 16:07:02.026814] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:14:01.402 [2024-11-20 16:07:02.026847] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2701732 ] 00:14:01.402 [2024-11-20 16:07:02.067740] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:01.403 [2024-11-20 16:07:02.071986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:01.403 [2024-11-20 16:07:02.072011] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f700e33a000 00:14:01.403 [2024-11-20 16:07:02.072988] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.073987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.074992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.076000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.077009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.078016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.079028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:01.403 
[2024-11-20 16:07:02.080040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.403 [2024-11-20 16:07:02.081054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:01.403 [2024-11-20 16:07:02.081067] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f700e32f000 00:14:01.403 [2024-11-20 16:07:02.082009] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:01.403 [2024-11-20 16:07:02.091527] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:01.403 [2024-11-20 16:07:02.091552] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:01.403 [2024-11-20 16:07:02.096625] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:01.403 [2024-11-20 16:07:02.096663] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:01.403 [2024-11-20 16:07:02.096732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:01.403 [2024-11-20 16:07:02.096745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:01.403 [2024-11-20 16:07:02.096749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:01.403 [2024-11-20 16:07:02.097627] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:01.403 [2024-11-20 16:07:02.097638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:01.403 [2024-11-20 16:07:02.097644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:01.403 [2024-11-20 16:07:02.098634] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:01.403 [2024-11-20 16:07:02.098643] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:01.403 [2024-11-20 16:07:02.098649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:01.403 [2024-11-20 16:07:02.099645] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:01.403 [2024-11-20 16:07:02.099653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:01.403 [2024-11-20 16:07:02.100651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:01.403 [2024-11-20 16:07:02.100660] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:01.403 [2024-11-20 16:07:02.100664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:01.403 [2024-11-20 16:07:02.100671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:01.403 [2024-11-20 16:07:02.100778] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:01.403 [2024-11-20 16:07:02.100783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:01.403 [2024-11-20 16:07:02.100787] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:01.403 [2024-11-20 16:07:02.101666] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:01.403 [2024-11-20 16:07:02.102669] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:01.403 [2024-11-20 16:07:02.103681] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:01.403 [2024-11-20 16:07:02.104681] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.403 [2024-11-20 16:07:02.104719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:01.403 [2024-11-20 16:07:02.105696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:01.403 [2024-11-20 16:07:02.105705] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:01.403 [2024-11-20 16:07:02.105709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:14:01.403 [2024-11-20 16:07:02.105726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:14:01.403 [2024-11-20 16:07:02.105736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:14:01.403 [2024-11-20 16:07:02.105748] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:01.403 [2024-11-20 16:07:02.105752] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:01.403 [2024-11-20 16:07:02.105756] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.403 [2024-11-20 16:07:02.105767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:01.403 [2024-11-20 16:07:02.112955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:14:01.403 [2024-11-20 16:07:02.112966] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:14:01.403 [2024-11-20 16:07:02.112971] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:14:01.403 [2024-11-20 16:07:02.112975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:14:01.403 [2024-11-20 16:07:02.112979] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:14:01.403 [2024-11-20 16:07:02.112986] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:14:01.403 [2024-11-20 16:07:02.112990] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:14:01.403 [2024-11-20 16:07:02.112994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:14:01.403 [2024-11-20 16:07:02.113002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:14:01.403 [2024-11-20 16:07:02.113012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:14:01.403 [2024-11-20 16:07:02.120954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:14:01.403 [2024-11-20 16:07:02.120966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:01.403 [2024-11-20 16:07:02.120976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:01.403 [2024-11-20 16:07:02.120983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:01.403 [2024-11-20 16:07:02.120991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:01.403 [2024-11-20 16:07:02.120995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:14:01.403 [2024-11-20 16:07:02.121001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:14:01.403 [2024-11-20 16:07:02.121009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:14:01.403 [2024-11-20 16:07:02.128954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:14:01.403 [2024-11-20 16:07:02.128965] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:14:01.403 [2024-11-20 16:07:02.128970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.128976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.128981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.128990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.136963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.137019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.137027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.137034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:14:01.404 [2024-11-20 16:07:02.137038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:14:01.404 [2024-11-20 16:07:02.137042] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.404 [2024-11-20 16:07:02.137048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.144953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.144965] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:14:01.404 [2024-11-20 16:07:02.144975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.144982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.144988] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:01.404 [2024-11-20 16:07:02.144992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:01.404 [2024-11-20 16:07:02.144996] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.404 [2024-11-20 16:07:02.145003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.152953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.152968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.152975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.152982] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:01.404 [2024-11-20 16:07:02.152986] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:01.404 [2024-11-20 16:07:02.152990] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.404 [2024-11-20 16:07:02.152995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.160955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.160965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.160971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.160978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.160984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.160989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.160993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.160998] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID
00:14:01.404 [2024-11-20 16:07:02.161003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms)
00:14:01.404 [2024-11-20 16:07:02.161007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout)
00:14:01.404 [2024-11-20 16:07:02.161023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.168954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.168967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.176953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.176966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.184955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.184967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.192954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.192969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:14:01.404 [2024-11-20 16:07:02.192974] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:14:01.404 [2024-11-20 16:07:02.192977] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:14:01.404 [2024-11-20 16:07:02.192980] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:14:01.404 [2024-11-20 16:07:02.192983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:14:01.404 [2024-11-20 16:07:02.192989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:14:01.404 [2024-11-20 16:07:02.192996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:14:01.404 [2024-11-20 16:07:02.193000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:14:01.404 [2024-11-20 16:07:02.193003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.404 [2024-11-20 16:07:02.193008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.193014] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:14:01.404 [2024-11-20 16:07:02.193018] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:01.404 [2024-11-20 16:07:02.193021] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.404 [2024-11-20 16:07:02.193027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.193033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:14:01.404 [2024-11-20 16:07:02.193037] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:14:01.404 [2024-11-20 16:07:02.193040] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:01.404 [2024-11-20 16:07:02.193045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:14:01.404 [2024-11-20 16:07:02.200956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:14:01.404 [2024-11-20 16:07:02.200969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:14:01.405 [2024-11-20 16:07:02.200979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:14:01.405 [2024-11-20 16:07:02.200985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:14:01.405 =====================================================
00:14:01.405 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:01.405 =====================================================
00:14:01.405 Controller Capabilities/Features
00:14:01.405 ================================
00:14:01.405 Vendor ID: 4e58
00:14:01.405 Subsystem Vendor ID: 4e58
00:14:01.405 Serial Number: SPDK2
00:14:01.405 Model Number: SPDK bdev Controller
00:14:01.405 Firmware Version: 25.01
00:14:01.405 Recommended Arb Burst: 6
00:14:01.405 IEEE OUI Identifier: 8d 6b 50
00:14:01.405 Multi-path I/O
00:14:01.405 May have multiple subsystem ports: Yes
00:14:01.405 May have multiple controllers: Yes
00:14:01.405 Associated with SR-IOV VF: No
00:14:01.405 Max Data Transfer Size: 131072
00:14:01.405 Max Number of Namespaces: 32
00:14:01.405 Max Number of I/O Queues: 127
00:14:01.405 NVMe Specification Version (VS): 1.3
00:14:01.405 NVMe Specification Version (Identify): 1.3
00:14:01.405 Maximum Queue Entries: 256
00:14:01.405 Contiguous Queues Required: Yes
00:14:01.405 Arbitration Mechanisms Supported
00:14:01.405 Weighted Round Robin: Not Supported
00:14:01.405 Vendor Specific: Not Supported
00:14:01.405 Reset Timeout: 15000 ms
00:14:01.405 Doorbell Stride: 4 bytes
00:14:01.405 NVM Subsystem Reset: Not Supported
00:14:01.405 Command Sets Supported
00:14:01.405 NVM Command Set: Supported
00:14:01.405 Boot Partition: Not Supported
00:14:01.405 Memory Page Size Minimum: 4096 bytes
00:14:01.405 Memory Page Size Maximum: 4096 bytes
00:14:01.405 Persistent Memory Region: Not Supported
00:14:01.405 Optional Asynchronous Events Supported
00:14:01.405 Namespace Attribute Notices: Supported
00:14:01.405 Firmware Activation Notices: Not Supported
00:14:01.405 ANA Change Notices: Not Supported
00:14:01.405 PLE Aggregate Log Change Notices: Not Supported
00:14:01.405 LBA Status Info Alert Notices: Not Supported
00:14:01.405 EGE Aggregate Log Change Notices: Not Supported
00:14:01.405 Normal NVM Subsystem Shutdown event: Not Supported
00:14:01.405 Zone Descriptor Change Notices: Not Supported
00:14:01.405 Discovery Log Change Notices: Not Supported
00:14:01.405 Controller Attributes
00:14:01.405 128-bit Host Identifier: Supported
00:14:01.405 Non-Operational Permissive Mode: Not Supported
00:14:01.405 NVM Sets: Not Supported
00:14:01.405 Read Recovery Levels: Not Supported
00:14:01.405 Endurance Groups: Not Supported
00:14:01.405 Predictable Latency Mode: Not Supported
00:14:01.405 Traffic Based Keep ALive: Not Supported
00:14:01.405 Namespace Granularity: Not Supported
00:14:01.405 SQ Associations: Not Supported
00:14:01.405 UUID List: Not Supported
00:14:01.405 Multi-Domain Subsystem: Not Supported
00:14:01.405 Fixed Capacity Management: Not Supported
00:14:01.405 Variable Capacity Management: Not Supported
00:14:01.405 Delete Endurance Group: Not Supported
00:14:01.405 Delete NVM Set: Not Supported
00:14:01.405 Extended LBA Formats Supported: Not Supported
00:14:01.405 Flexible Data Placement Supported: Not Supported
00:14:01.405
00:14:01.405 Controller Memory Buffer Support
00:14:01.405 ================================
00:14:01.405 Supported: No
00:14:01.405
00:14:01.405 Persistent Memory Region Support
00:14:01.405 ================================
00:14:01.405 Supported: No
00:14:01.405
00:14:01.405 Admin Command Set Attributes
00:14:01.405 ============================
00:14:01.405 Security Send/Receive: Not Supported
00:14:01.405 Format NVM: Not Supported
00:14:01.405 Firmware Activate/Download: Not Supported
00:14:01.405 Namespace Management: Not Supported
00:14:01.405 Device Self-Test: Not Supported
00:14:01.405 Directives: Not Supported
00:14:01.405 NVMe-MI: Not Supported
00:14:01.405 Virtualization Management: Not Supported
00:14:01.405 Doorbell Buffer Config: Not Supported
00:14:01.405 Get LBA Status Capability: Not Supported
00:14:01.405 Command & Feature Lockdown Capability: Not Supported
00:14:01.405 Abort Command Limit: 4
00:14:01.405 Async Event Request Limit: 4
00:14:01.405 Number of Firmware Slots: N/A
00:14:01.405 Firmware Slot 1 Read-Only: N/A
00:14:01.405 Firmware Activation Without Reset: N/A
00:14:01.405 Multiple Update Detection Support: N/A
00:14:01.405 Firmware Update Granularity: No Information Provided
00:14:01.405 Per-Namespace SMART Log: No
00:14:01.405 Asymmetric Namespace Access Log Page: Not Supported
00:14:01.405 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:14:01.405 Command Effects Log Page: Supported
00:14:01.405 Get Log Page Extended Data: Supported
00:14:01.405 Telemetry Log Pages: Not Supported
00:14:01.405 Persistent Event Log Pages: Not Supported
00:14:01.405 Supported Log Pages Log Page: May Support
00:14:01.405 Commands Supported & Effects Log Page: Not Supported
00:14:01.405 Feature Identifiers & Effects Log Page:May Support
00:14:01.405 NVMe-MI Commands & Effects Log Page: May Support
00:14:01.405 Data Area 4 for Telemetry Log: Not Supported
00:14:01.405 Error Log Page Entries Supported: 128
00:14:01.405 Keep Alive: Supported
00:14:01.405 Keep Alive Granularity: 10000 ms
00:14:01.405
00:14:01.405 NVM Command Set Attributes
00:14:01.405 ==========================
00:14:01.405 Submission Queue Entry Size
00:14:01.405 Max: 64
00:14:01.405 Min: 64
00:14:01.405 Completion Queue Entry Size
00:14:01.405 Max: 16
00:14:01.405 Min: 16
00:14:01.405 Number of Namespaces: 32
00:14:01.405 Compare Command: Supported
00:14:01.405 Write Uncorrectable Command: Not Supported
00:14:01.405 Dataset Management Command: Supported
00:14:01.405 Write Zeroes Command: Supported
00:14:01.405 Set Features Save Field: Not Supported
00:14:01.405 Reservations: Not Supported
00:14:01.405 Timestamp: Not Supported
00:14:01.405 Copy: Supported
00:14:01.405 Volatile Write Cache: Present
00:14:01.405 Atomic Write Unit (Normal): 1
00:14:01.405 Atomic Write Unit (PFail): 1
00:14:01.405 Atomic Compare & Write Unit: 1
00:14:01.405 Fused Compare & Write: Supported
00:14:01.405 Scatter-Gather List
00:14:01.405 SGL Command Set: Supported (Dword aligned)
00:14:01.405 SGL Keyed: Not Supported
00:14:01.405 SGL Bit Bucket Descriptor: Not Supported
00:14:01.405 SGL Metadata Pointer: Not Supported
00:14:01.405 Oversized SGL: Not Supported
00:14:01.405 SGL Metadata Address: Not Supported
00:14:01.405 SGL Offset: Not Supported
00:14:01.405 Transport SGL Data Block: Not Supported
00:14:01.405 Replay Protected Memory Block: Not Supported
00:14:01.405
00:14:01.405 Firmware Slot Information
00:14:01.405 =========================
00:14:01.405 Active slot: 1
00:14:01.405 Slot 1 Firmware Revision: 25.01
00:14:01.405
00:14:01.405
00:14:01.405 Commands Supported and Effects
00:14:01.405 ==============================
00:14:01.405 Admin Commands
00:14:01.405 --------------
00:14:01.405 Get Log Page (02h): Supported
00:14:01.405 Identify (06h): Supported
00:14:01.405 Abort (08h): Supported
00:14:01.405 Set Features (09h): Supported
00:14:01.405 Get Features (0Ah): Supported
00:14:01.405 Asynchronous Event Request (0Ch): Supported
00:14:01.405 Keep Alive (18h): Supported
00:14:01.405 I/O Commands
00:14:01.405 ------------
00:14:01.405 Flush (00h): Supported LBA-Change
00:14:01.405 Write (01h): Supported LBA-Change
00:14:01.405 Read (02h): Supported
00:14:01.405 Compare (05h): Supported
00:14:01.405 Write Zeroes (08h): Supported LBA-Change
00:14:01.405 Dataset Management (09h): Supported LBA-Change
00:14:01.405 Copy (19h): Supported LBA-Change
00:14:01.405
00:14:01.405 Error Log
00:14:01.405 =========
00:14:01.405
00:14:01.405 Arbitration
00:14:01.405 ===========
00:14:01.405 Arbitration Burst: 1
00:14:01.405
00:14:01.405 Power Management
00:14:01.405 ================
00:14:01.405 Number of Power States: 1
00:14:01.405 Current Power State: Power State #0
00:14:01.405 Power State #0:
00:14:01.405 Max Power: 0.00 W
00:14:01.405 Non-Operational State: Operational
00:14:01.405 Entry Latency: Not Reported
00:14:01.406 Exit Latency: Not Reported
00:14:01.406 Relative Read Throughput: 0
00:14:01.406 Relative Read Latency: 0
00:14:01.406 Relative Write Throughput: 0
00:14:01.406 Relative Write Latency: 0
00:14:01.406 Idle Power: Not Reported
00:14:01.406 Active Power: Not Reported
00:14:01.406 Non-Operational Permissive Mode: Not Supported
00:14:01.406
00:14:01.406 Health Information
00:14:01.406 ==================
00:14:01.406 Critical Warnings:
00:14:01.406 Available Spare Space: OK
00:14:01.406 Temperature: OK
00:14:01.406 Device Reliability: OK
00:14:01.406 Read Only: No
00:14:01.406 Volatile Memory Backup: OK
00:14:01.406 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:01.406 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:01.406 Available Spare: 0%
00:14:01.406 Available Sp[2024-11-20 16:07:02.201077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:01.406 [2024-11-20 16:07:02.208953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:01.406 [2024-11-20 16:07:02.208981] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:14:01.406 [2024-11-20 16:07:02.208990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:01.406 [2024-11-20 16:07:02.208996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:01.406 [2024-11-20 16:07:02.209003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:01.406 [2024-11-20 16:07:02.209009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:01.406 [2024-11-20 16:07:02.209059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:14:01.406 [2024-11-20 16:07:02.209069] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:14:01.406 [2024-11-20 16:07:02.210061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:01.406 [2024-11-20 16:07:02.210106] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:14:01.406 [2024-11-20 16:07:02.210113] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:14:01.406 [2024-11-20 16:07:02.211063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:14:01.406 [2024-11-20 16:07:02.211075] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:14:01.406 [2024-11-20 16:07:02.211122] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:14:01.406 [2024-11-20 16:07:02.212102] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:01.665 are Threshold: 0%
00:14:01.665 Life Percentage Used: 0%
00:14:01.665 Data Units Read: 0
00:14:01.665 Data Units Written: 0
00:14:01.665 Host Read Commands: 0
00:14:01.665 Host Write Commands: 0
00:14:01.665 Controller Busy Time: 0 minutes
00:14:01.665 Power Cycles: 0
00:14:01.665 Power On Hours: 0 hours
00:14:01.665 Unsafe Shutdowns: 0
00:14:01.665 Unrecoverable Media Errors: 0
00:14:01.665 Lifetime Error Log Entries: 0
00:14:01.665 Warning Temperature Time: 0 minutes
00:14:01.665 Critical Temperature Time: 0 minutes
00:14:01.665
00:14:01.665 Number of Queues
00:14:01.665 ================
00:14:01.665 Number of I/O Submission Queues: 127
00:14:01.665 Number of I/O Completion Queues: 127
00:14:01.665
00:14:01.665 Active Namespaces
00:14:01.665 =================
00:14:01.665 Namespace ID:1
00:14:01.665 Error Recovery Timeout: Unlimited
00:14:01.665 Command Set Identifier: NVM (00h)
00:14:01.665 Deallocate: Supported
00:14:01.665 Deallocated/Unwritten Error: Not Supported
00:14:01.665 Deallocated Read Value: Unknown
00:14:01.665 Deallocate in Write Zeroes: Not Supported
00:14:01.665 Deallocated Guard Field: 0xFFFF
00:14:01.665 Flush: Supported
00:14:01.665 Reservation: Supported
00:14:01.665 Namespace Sharing Capabilities: Multiple Controllers
00:14:01.665 Size (in LBAs): 131072 (0GiB)
00:14:01.665 Capacity (in LBAs): 131072 (0GiB)
00:14:01.665 Utilization (in LBAs): 131072 (0GiB)
00:14:01.665 NGUID: 966903D33F6A47BFACDEFC7CB99A6624
00:14:01.665 UUID: 966903d3-3f6a-47bf-acde-fc7cb99a6624
00:14:01.665 Thin Provisioning: Not Supported
00:14:01.665 Per-NS Atomic Units: Yes
00:14:01.665 Atomic Boundary Size (Normal): 0
00:14:01.665 Atomic Boundary Size (PFail): 0
00:14:01.665 Atomic Boundary Offset: 0
00:14:01.665 Maximum Single Source Range Length: 65535
00:14:01.665 Maximum Copy Length: 65535
00:14:01.665 Maximum Source Range Count: 1
00:14:01.665 NGUID/EUI64 Never Reused: No
00:14:01.665 Namespace Write Protected: No
00:14:01.665 Number of LBA Formats: 1
00:14:01.665 Current LBA Format: LBA Format #00
00:14:01.665 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:01.665
00:14:01.666 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:01.666 [2024-11-20 16:07:02.438531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:06.947 Initializing NVMe Controllers
00:14:06.947 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:06.947 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:14:06.947 Initialization complete. Launching workers.
00:14:06.947 ========================================================
00:14:06.947 Latency(us)
00:14:06.947 Device Information : IOPS MiB/s Average min max
00:14:06.947 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39914.80 155.92 3206.65 949.40 9611.16
00:14:06.947 ========================================================
00:14:06.947 Total : 39914.80 155.92 3206.65 949.40 9611.16
00:14:06.947
00:14:06.947 [2024-11-20 16:07:07.542211] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:06.947 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:14:06.947 [2024-11-20 16:07:07.780901] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:12.220 Initializing NVMe Controllers
00:14:12.220 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:12.220 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:14:12.220 Initialization complete. Launching workers.
00:14:12.220 ========================================================
00:14:12.220 Latency(us)
00:14:12.220 Device Information : IOPS MiB/s Average min max
00:14:12.220 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39938.96 156.01 3204.71 977.41 7191.89
00:14:12.220 ========================================================
00:14:12.220 Total : 39938.96 156.01 3204.71 977.41 7191.89
00:14:12.220
00:14:12.220 [2024-11-20 16:07:12.799746] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:12.220 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:14:12.220 [2024-11-20 16:07:13.007516] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:17.503 [2024-11-20 16:07:18.147038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:17.504 Initializing NVMe Controllers
00:14:17.504 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:17.504 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:14:17.504 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:14:17.504 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:14:17.504 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:14:17.504 Initialization complete. Launching workers.
00:14:17.504 Starting thread on core 2
00:14:17.504 Starting thread on core 3
00:14:17.504 Starting thread on core 1
00:14:17.504 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:14:17.763 [2024-11-20 16:07:18.440769] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:21.051 [2024-11-20 16:07:21.625148] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:21.051 Initializing NVMe Controllers
00:14:21.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:14:21.051 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:14:21.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:14:21.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:14:21.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:14:21.051 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:14:21.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:14:21.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:14:21.051 Initialization complete. Launching workers.
00:14:21.051 Starting thread on core 1 with urgent priority queue
00:14:21.051 Starting thread on core 2 with urgent priority queue
00:14:21.051 Starting thread on core 3 with urgent priority queue
00:14:21.051 Starting thread on core 0 with urgent priority queue
00:14:21.051 SPDK bdev Controller (SPDK2 ) core 0: 9853.33 IO/s 10.15 secs/100000 ios
00:14:21.051 SPDK bdev Controller (SPDK2 ) core 1: 6430.67 IO/s 15.55 secs/100000 ios
00:14:21.051 SPDK bdev Controller (SPDK2 ) core 2: 7011.00 IO/s 14.26 secs/100000 ios
00:14:21.051 SPDK bdev Controller (SPDK2 ) core 3: 7649.33 IO/s 13.07 secs/100000 ios
00:14:21.051 ========================================================
00:14:21.051
00:14:21.051 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:14:21.309 [2024-11-20 16:07:21.915360] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:21.309 Initializing NVMe Controllers
00:14:21.309 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:14:21.309 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:14:21.309 Namespace ID: 1 size: 0GB
00:14:21.309 Initialization complete.
00:14:21.309 INFO: using host memory buffer for IO
00:14:21.309 Hello world!
00:14:21.309 [2024-11-20 16:07:21.925429] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:21.309 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:21.568 [2024-11-20 16:07:22.211907] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.506 Initializing NVMe Controllers 00:14:22.506 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:22.506 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:22.506 Initialization complete. Launching workers. 00:14:22.506 submit (in ns) avg, min, max = 6397.4, 3267.0, 3999072.2 00:14:22.506 complete (in ns) avg, min, max = 19839.1, 1808.7, 7986084.3 00:14:22.506 00:14:22.506 Submit histogram 00:14:22.506 ================ 00:14:22.506 Range in us Cumulative Count 00:14:22.506 3.256 - 3.270: 0.0185% ( 3) 00:14:22.506 3.270 - 3.283: 0.5665% ( 89) 00:14:22.506 3.283 - 3.297: 3.0359% ( 401) 00:14:22.506 3.297 - 3.311: 8.0239% ( 810) 00:14:22.506 3.311 - 3.325: 13.8740% ( 950) 00:14:22.506 3.325 - 3.339: 19.9335% ( 984) 00:14:22.506 3.339 - 3.353: 26.1161% ( 1004) 00:14:22.506 3.353 - 3.367: 31.3997% ( 858) 00:14:22.506 3.367 - 3.381: 36.1907% ( 778) 00:14:22.506 3.381 - 3.395: 41.5974% ( 878) 00:14:22.506 3.395 - 3.409: 46.2590% ( 757) 00:14:22.506 3.409 - 3.423: 50.3787% ( 669) 00:14:22.506 3.423 - 3.437: 54.4800% ( 666) 00:14:22.506 3.437 - 3.450: 61.1121% ( 1077) 00:14:22.506 3.450 - 3.464: 67.7628% ( 1080) 00:14:22.507 3.464 - 3.478: 72.4737% ( 765) 00:14:22.507 3.478 - 3.492: 77.4432% ( 807) 00:14:22.507 3.492 - 3.506: 81.8523% ( 716) 00:14:22.507 3.506 - 3.520: 84.6542% ( 455) 00:14:22.507 3.520 - 3.534: 86.5201% ( 303) 00:14:22.507 3.534 - 3.548: 87.4931% ( 158) 
00:14:22.507 3.548 - 3.562: 87.8995% ( 66) 00:14:22.507 3.562 - 3.590: 88.6385% ( 120) 00:14:22.507 3.590 - 3.617: 90.0979% ( 237) 00:14:22.507 3.617 - 3.645: 91.6128% ( 246) 00:14:22.507 3.645 - 3.673: 93.1646% ( 252) 00:14:22.507 3.673 - 3.701: 94.7226% ( 253) 00:14:22.507 3.701 - 3.729: 96.3298% ( 261) 00:14:22.507 3.729 - 3.757: 97.6661% ( 217) 00:14:22.507 3.757 - 3.784: 98.5159% ( 138) 00:14:22.507 3.784 - 3.812: 99.0701% ( 90) 00:14:22.507 3.812 - 3.840: 99.4581% ( 63) 00:14:22.507 3.840 - 3.868: 99.6244% ( 27) 00:14:22.507 3.868 - 3.896: 99.6613% ( 6) 00:14:22.507 3.923 - 3.951: 99.6736% ( 2) 00:14:22.507 5.398 - 5.426: 99.6798% ( 1) 00:14:22.507 5.454 - 5.482: 99.6859% ( 1) 00:14:22.507 5.565 - 5.593: 99.6921% ( 1) 00:14:22.507 5.621 - 5.649: 99.6983% ( 1) 00:14:22.507 5.649 - 5.677: 99.7044% ( 1) 00:14:22.507 5.677 - 5.704: 99.7106% ( 1) 00:14:22.507 5.760 - 5.788: 99.7167% ( 1) 00:14:22.507 5.788 - 5.816: 99.7229% ( 1) 00:14:22.507 5.816 - 5.843: 99.7352% ( 2) 00:14:22.507 5.871 - 5.899: 99.7414% ( 1) 00:14:22.507 5.955 - 5.983: 99.7475% ( 1) 00:14:22.507 6.066 - 6.094: 99.7537% ( 1) 00:14:22.507 6.094 - 6.122: 99.7598% ( 1) 00:14:22.507 6.122 - 6.150: 99.7660% ( 1) 00:14:22.507 6.177 - 6.205: 99.7783% ( 2) 00:14:22.507 6.233 - 6.261: 99.7845% ( 1) 00:14:22.507 6.344 - 6.372: 99.7906% ( 1) 00:14:22.507 6.400 - 6.428: 99.7968% ( 1) 00:14:22.507 6.456 - 6.483: 99.8029% ( 1) 00:14:22.507 6.539 - 6.567: 99.8091% ( 1) 00:14:22.507 6.595 - 6.623: 99.8214% ( 2) 00:14:22.507 6.734 - 6.762: 99.8276% ( 1) 00:14:22.507 6.790 - 6.817: 99.8337% ( 1) 00:14:22.507 6.817 - 6.845: 99.8399% ( 1) 00:14:22.507 6.901 - 6.929: 99.8460% ( 1) 00:14:22.507 6.984 - 7.012: 99.8522% ( 1) 00:14:22.507 7.179 - 7.235: 99.8584% ( 1) 00:14:22.507 7.235 - 7.290: 99.8645% ( 1) 00:14:22.507 7.513 - 7.569: 99.8707% ( 1) 00:14:22.507 7.569 - 7.624: 99.8768% ( 1) 00:14:22.507 7.680 - 7.736: 99.8892% ( 2) 00:14:22.507 7.791 - 7.847: 99.8953% ( 1) 00:14:22.507 7.847 - 7.903: 99.9015% ( 1) 
00:14:22.507 9.405 - 9.461: 99.9076% ( 1) 00:14:22.507 9.739 - 9.795: 99.9138% ( 1) 00:14:22.507 10.963 - 11.019: 99.9199% ( 1) 00:14:22.507 11.798 - 11.854: 99.9261% ( 1) 00:14:22.507 3989.148 - 4017.642: 100.0000% ( 12) 00:14:22.507 00:14:22.507 Complete histogram 00:14:22.507 ================== 00:14:22.507 Range in us Cumulative Count 00:14:22.507 1.809 - 1.823: 1.2439% ( 202) 00:14:22.507 1.823 - 1.837: 4.4584% ( 522) 00:14:22.507 1.837 - 1.850: 5.8440% ( 225) 00:14:22.507 1.850 - 1.864: 10.5302% ( 761) 00:14:22.507 1.864 - [2024-11-20 16:07:23.309998] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.767 1.878: 59.5234% ( 7956) 00:14:22.767 1.878 - 1.892: 88.6754% ( 4734) 00:14:22.767 1.892 - 1.906: 94.1560% ( 890) 00:14:22.767 1.906 - 1.920: 95.8741% ( 279) 00:14:22.767 1.920 - 1.934: 96.4284% ( 90) 00:14:22.767 1.934 - 1.948: 97.5737% ( 186) 00:14:22.767 1.948 - 1.962: 98.6206% ( 170) 00:14:22.767 1.962 - 1.976: 99.2179% ( 97) 00:14:22.767 1.976 - 1.990: 99.3288% ( 18) 00:14:22.767 1.990 - 2.003: 99.3473% ( 3) 00:14:22.767 2.003 - 2.017: 99.3596% ( 2) 00:14:22.767 2.031 - 2.045: 99.3657% ( 1) 00:14:22.767 2.059 - 2.073: 99.3719% ( 1) 00:14:22.767 2.073 - 2.087: 99.3904% ( 3) 00:14:22.767 2.087 - 2.101: 99.3965% ( 1) 00:14:22.767 2.101 - 2.115: 99.4027% ( 1) 00:14:22.767 2.157 - 2.170: 99.4088% ( 1) 00:14:22.767 3.562 - 3.590: 99.4150% ( 1) 00:14:22.767 3.757 - 3.784: 99.4211% ( 1) 00:14:22.767 3.840 - 3.868: 99.4273% ( 1) 00:14:22.767 3.868 - 3.896: 99.4335% ( 1) 00:14:22.767 3.896 - 3.923: 99.4396% ( 1) 00:14:22.767 4.007 - 4.035: 99.4519% ( 2) 00:14:22.767 4.035 - 4.063: 99.4581% ( 1) 00:14:22.767 4.341 - 4.369: 99.4643% ( 1) 00:14:22.767 4.397 - 4.424: 99.4704% ( 1) 00:14:22.767 4.591 - 4.619: 99.4766% ( 1) 00:14:22.767 4.675 - 4.703: 99.4827% ( 1) 00:14:22.767 4.730 - 4.758: 99.4950% ( 2) 00:14:22.767 4.786 - 4.814: 99.5012% ( 1) 00:14:22.767 4.842 - 4.870: 99.5074% ( 1) 00:14:22.767 4.870 - 
4.897: 99.5135% ( 1) 00:14:22.767 5.064 - 5.092: 99.5197% ( 1) 00:14:22.767 5.203 - 5.231: 99.5258% ( 1) 00:14:22.767 5.871 - 5.899: 99.5320% ( 1) 00:14:22.767 6.150 - 6.177: 99.5381% ( 1) 00:14:22.767 6.261 - 6.289: 99.5443% ( 1) 00:14:22.767 33.391 - 33.614: 99.5505% ( 1) 00:14:22.767 182.539 - 183.430: 99.5566% ( 1) 00:14:22.767 3846.678 - 3875.172: 99.5628% ( 1) 00:14:22.767 3989.148 - 4017.642: 99.9938% ( 70) 00:14:22.767 7978.296 - 8035.283: 100.0000% ( 1) 00:14:22.767 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:22.767 [ 00:14:22.767 { 00:14:22.767 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:22.767 "subtype": "Discovery", 00:14:22.767 "listen_addresses": [], 00:14:22.767 "allow_any_host": true, 00:14:22.767 "hosts": [] 00:14:22.767 }, 00:14:22.767 { 00:14:22.767 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:22.767 "subtype": "NVMe", 00:14:22.767 "listen_addresses": [ 00:14:22.767 { 00:14:22.767 "trtype": "VFIOUSER", 00:14:22.767 "adrfam": "IPv4", 00:14:22.767 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:22.767 "trsvcid": "0" 00:14:22.767 } 00:14:22.767 ], 00:14:22.767 "allow_any_host": true, 00:14:22.767 "hosts": [], 00:14:22.767 "serial_number": "SPDK1", 00:14:22.767 "model_number": "SPDK bdev Controller", 
00:14:22.767 "max_namespaces": 32, 00:14:22.767 "min_cntlid": 1, 00:14:22.767 "max_cntlid": 65519, 00:14:22.767 "namespaces": [ 00:14:22.767 { 00:14:22.767 "nsid": 1, 00:14:22.767 "bdev_name": "Malloc1", 00:14:22.767 "name": "Malloc1", 00:14:22.767 "nguid": "04706D8CD2554872839431E3017A19CB", 00:14:22.767 "uuid": "04706d8c-d255-4872-8394-31e3017a19cb" 00:14:22.767 }, 00:14:22.767 { 00:14:22.767 "nsid": 2, 00:14:22.767 "bdev_name": "Malloc3", 00:14:22.767 "name": "Malloc3", 00:14:22.767 "nguid": "A2EE4558D47346D3B45505EA565FED91", 00:14:22.767 "uuid": "a2ee4558-d473-46d3-b455-05ea565fed91" 00:14:22.767 } 00:14:22.767 ] 00:14:22.767 }, 00:14:22.767 { 00:14:22.767 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:22.767 "subtype": "NVMe", 00:14:22.767 "listen_addresses": [ 00:14:22.767 { 00:14:22.767 "trtype": "VFIOUSER", 00:14:22.767 "adrfam": "IPv4", 00:14:22.767 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:22.767 "trsvcid": "0" 00:14:22.767 } 00:14:22.767 ], 00:14:22.767 "allow_any_host": true, 00:14:22.767 "hosts": [], 00:14:22.767 "serial_number": "SPDK2", 00:14:22.767 "model_number": "SPDK bdev Controller", 00:14:22.767 "max_namespaces": 32, 00:14:22.767 "min_cntlid": 1, 00:14:22.767 "max_cntlid": 65519, 00:14:22.767 "namespaces": [ 00:14:22.767 { 00:14:22.767 "nsid": 1, 00:14:22.767 "bdev_name": "Malloc2", 00:14:22.767 "name": "Malloc2", 00:14:22.767 "nguid": "966903D33F6A47BFACDEFC7CB99A6624", 00:14:22.767 "uuid": "966903d3-3f6a-47bf-acde-fc7cb99a6624" 00:14:22.767 } 00:14:22.767 ] 00:14:22.767 } 00:14:22.767 ] 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2705400 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:22.767 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:23.027 [2024-11-20 16:07:23.720328] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.027 Malloc4 00:14:23.027 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:23.287 [2024-11-20 16:07:23.956102] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:23.287 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:23.287 Asynchronous Event Request test 00:14:23.287 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:23.287 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:23.287 Registering asynchronous 
event callbacks... 00:14:23.287 Starting namespace attribute notice tests for all controllers... 00:14:23.287 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:23.287 aer_cb - Changed Namespace 00:14:23.287 Cleaning up... 00:14:23.547 [ 00:14:23.547 { 00:14:23.547 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:23.547 "subtype": "Discovery", 00:14:23.547 "listen_addresses": [], 00:14:23.547 "allow_any_host": true, 00:14:23.547 "hosts": [] 00:14:23.547 }, 00:14:23.547 { 00:14:23.547 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:23.547 "subtype": "NVMe", 00:14:23.547 "listen_addresses": [ 00:14:23.547 { 00:14:23.547 "trtype": "VFIOUSER", 00:14:23.547 "adrfam": "IPv4", 00:14:23.547 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:23.547 "trsvcid": "0" 00:14:23.547 } 00:14:23.547 ], 00:14:23.547 "allow_any_host": true, 00:14:23.547 "hosts": [], 00:14:23.547 "serial_number": "SPDK1", 00:14:23.547 "model_number": "SPDK bdev Controller", 00:14:23.547 "max_namespaces": 32, 00:14:23.547 "min_cntlid": 1, 00:14:23.547 "max_cntlid": 65519, 00:14:23.547 "namespaces": [ 00:14:23.547 { 00:14:23.547 "nsid": 1, 00:14:23.547 "bdev_name": "Malloc1", 00:14:23.547 "name": "Malloc1", 00:14:23.547 "nguid": "04706D8CD2554872839431E3017A19CB", 00:14:23.547 "uuid": "04706d8c-d255-4872-8394-31e3017a19cb" 00:14:23.547 }, 00:14:23.547 { 00:14:23.547 "nsid": 2, 00:14:23.547 "bdev_name": "Malloc3", 00:14:23.547 "name": "Malloc3", 00:14:23.547 "nguid": "A2EE4558D47346D3B45505EA565FED91", 00:14:23.547 "uuid": "a2ee4558-d473-46d3-b455-05ea565fed91" 00:14:23.547 } 00:14:23.547 ] 00:14:23.547 }, 00:14:23.547 { 00:14:23.547 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:23.547 "subtype": "NVMe", 00:14:23.547 "listen_addresses": [ 00:14:23.547 { 00:14:23.547 "trtype": "VFIOUSER", 00:14:23.547 "adrfam": "IPv4", 00:14:23.547 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:23.547 "trsvcid": "0" 00:14:23.547 } 00:14:23.547 ], 
00:14:23.547 "allow_any_host": true, 00:14:23.547 "hosts": [], 00:14:23.547 "serial_number": "SPDK2", 00:14:23.547 "model_number": "SPDK bdev Controller", 00:14:23.547 "max_namespaces": 32, 00:14:23.547 "min_cntlid": 1, 00:14:23.547 "max_cntlid": 65519, 00:14:23.547 "namespaces": [ 00:14:23.547 { 00:14:23.547 "nsid": 1, 00:14:23.547 "bdev_name": "Malloc2", 00:14:23.547 "name": "Malloc2", 00:14:23.547 "nguid": "966903D33F6A47BFACDEFC7CB99A6624", 00:14:23.547 "uuid": "966903d3-3f6a-47bf-acde-fc7cb99a6624" 00:14:23.547 }, 00:14:23.547 { 00:14:23.547 "nsid": 2, 00:14:23.547 "bdev_name": "Malloc4", 00:14:23.547 "name": "Malloc4", 00:14:23.547 "nguid": "BDBBBB86781249B28732D1B237652EDD", 00:14:23.547 "uuid": "bdbbbb86-7812-49b2-8732-d1b237652edd" 00:14:23.547 } 00:14:23.547 ] 00:14:23.547 } 00:14:23.547 ] 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2705400 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2697643 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2697643 ']' 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2697643 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697643 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697643' 00:14:23.547 killing process with pid 2697643 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2697643 00:14:23.547 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2697643 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2705428 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2705428' 00:14:23.807 Process pid: 2705428 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2705428 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2705428 
']' 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.807 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:23.807 [2024-11-20 16:07:24.517516] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:23.807 [2024-11-20 16:07:24.518422] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:14:23.807 [2024-11-20 16:07:24.518459] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.807 [2024-11-20 16:07:24.595480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.807 [2024-11-20 16:07:24.638184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.807 [2024-11-20 16:07:24.638220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.807 [2024-11-20 16:07:24.638228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.807 [2024-11-20 16:07:24.638234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:23.807 [2024-11-20 16:07:24.638240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.807 [2024-11-20 16:07:24.639745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.807 [2024-11-20 16:07:24.639879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.807 [2024-11-20 16:07:24.639987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.807 [2024-11-20 16:07:24.639987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.067 [2024-11-20 16:07:24.710068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:24.067 [2024-11-20 16:07:24.710275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:24.067 [2024-11-20 16:07:24.710926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:24.067 [2024-11-20 16:07:24.711247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:24.067 [2024-11-20 16:07:24.711296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:24.067 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.067 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:24.067 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:25.004 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:25.263 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:25.263 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:25.263 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:25.263 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:25.263 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:25.522 Malloc1 00:14:25.522 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:25.781 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:25.782 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:26.040 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.040 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:26.040 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:26.298 Malloc2 00:14:26.298 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:26.557 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2705428 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2705428 ']' 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2705428 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:26.815 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.816 16:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705428 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705428' 00:14:27.075 killing process with pid 2705428 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2705428 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2705428 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:27.075 00:14:27.075 real 0m51.034s 00:14:27.075 user 3m17.467s 00:14:27.075 sys 0m3.181s 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:27.075 ************************************ 00:14:27.075 END TEST nvmf_vfio_user 00:14:27.075 ************************************ 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.075 16:07:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.336 ************************************ 00:14:27.336 START TEST nvmf_vfio_user_nvme_compliance 00:14:27.336 ************************************ 00:14:27.337 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:27.337 * Looking for test storage... 00:14:27.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.337 16:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.337 16:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.337 --rc genhtml_branch_coverage=1 00:14:27.337 --rc genhtml_function_coverage=1 00:14:27.337 --rc genhtml_legend=1 00:14:27.337 --rc geninfo_all_blocks=1 00:14:27.337 --rc geninfo_unexecuted_blocks=1 00:14:27.337 00:14:27.337 ' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.337 --rc genhtml_branch_coverage=1 00:14:27.337 --rc genhtml_function_coverage=1 00:14:27.337 --rc genhtml_legend=1 00:14:27.337 --rc geninfo_all_blocks=1 00:14:27.337 --rc geninfo_unexecuted_blocks=1 00:14:27.337 00:14:27.337 ' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.337 --rc genhtml_branch_coverage=1 00:14:27.337 --rc genhtml_function_coverage=1 00:14:27.337 --rc 
genhtml_legend=1 00:14:27.337 --rc geninfo_all_blocks=1 00:14:27.337 --rc geninfo_unexecuted_blocks=1 00:14:27.337 00:14:27.337 ' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:27.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.337 --rc genhtml_branch_coverage=1 00:14:27.337 --rc genhtml_function_coverage=1 00:14:27.337 --rc genhtml_legend=1 00:14:27.337 --rc geninfo_all_blocks=1 00:14:27.337 --rc geninfo_unexecuted_blocks=1 00:14:27.337 00:14:27.337 ' 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.337 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.338 16:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.338 16:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2706188 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2706188' 00:14:27.338 Process pid: 2706188 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2706188 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2706188 ']' 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.338 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:27.598 [2024-11-20 16:07:28.191318] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:14:27.598 [2024-11-20 16:07:28.191369] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.598 [2024-11-20 16:07:28.266295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:27.598 [2024-11-20 16:07:28.305622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.598 [2024-11-20 16:07:28.305661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.598 [2024-11-20 16:07:28.305669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.598 [2024-11-20 16:07:28.305674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.598 [2024-11-20 16:07:28.305679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:27.598 [2024-11-20 16:07:28.307022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.598 [2024-11-20 16:07:28.307127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.598 [2024-11-20 16:07:28.307128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.598 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.598 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:27.598 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.988 16:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.988 malloc0 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:28.988 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:28.988 00:14:28.988 00:14:28.988 CUnit - A unit testing framework for C - Version 2.1-3 00:14:28.988 http://cunit.sourceforge.net/ 00:14:28.988 00:14:28.988 00:14:28.988 Suite: nvme_compliance 00:14:28.988 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 16:07:29.654457] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.988 [2024-11-20 16:07:29.655798] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:28.988 [2024-11-20 16:07:29.655813] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:28.988 [2024-11-20 16:07:29.655820] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:28.988 [2024-11-20 16:07:29.657471] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.988 passed 00:14:28.988 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 16:07:29.737073] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.988 [2024-11-20 16:07:29.740095] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.988 passed 00:14:28.988 Test: admin_identify_ns ...[2024-11-20 16:07:29.822332] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.248 [2024-11-20 16:07:29.882958] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:29.248 [2024-11-20 16:07:29.890966] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:29.248 [2024-11-20 16:07:29.912052] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:29.248 passed 00:14:29.248 Test: admin_get_features_mandatory_features ...[2024-11-20 16:07:29.987271] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.248 [2024-11-20 16:07:29.990291] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.248 passed 00:14:29.248 Test: admin_get_features_optional_features ...[2024-11-20 16:07:30.071913] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.248 [2024-11-20 16:07:30.074934] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.507 passed 00:14:29.507 Test: admin_set_features_number_of_queues ...[2024-11-20 16:07:30.154009] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.507 [2024-11-20 16:07:30.260113] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.507 passed 00:14:29.507 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 16:07:30.338384] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.507 [2024-11-20 16:07:30.341400] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.766 passed 00:14:29.766 Test: admin_get_log_page_with_lpo ...[2024-11-20 16:07:30.421517] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.766 [2024-11-20 16:07:30.489961] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:29.766 [2024-11-20 16:07:30.503022] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.766 passed 00:14:29.766 Test: fabric_property_get ...[2024-11-20 16:07:30.578280] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.766 [2024-11-20 16:07:30.579526] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:29.766 [2024-11-20 16:07:30.581302] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.026 passed 00:14:30.026 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 16:07:30.660812] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.026 [2024-11-20 16:07:30.662047] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:30.026 [2024-11-20 16:07:30.666852] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.026 passed 00:14:30.026 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 16:07:30.743868] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.026 [2024-11-20 16:07:30.826956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:30.026 [2024-11-20 16:07:30.842957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:30.026 [2024-11-20 16:07:30.848026] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.285 passed 00:14:30.285 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 16:07:30.925260] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.285 [2024-11-20 16:07:30.926483] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:30.285 [2024-11-20 16:07:30.928281] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.285 passed 00:14:30.285 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 16:07:31.005369] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.285 [2024-11-20 16:07:31.084957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:30.285 [2024-11-20 
16:07:31.108958] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:30.285 [2024-11-20 16:07:31.114034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.544 passed 00:14:30.544 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 16:07:31.189129] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.544 [2024-11-20 16:07:31.190375] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:30.544 [2024-11-20 16:07:31.190400] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:30.544 [2024-11-20 16:07:31.192157] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.544 passed 00:14:30.544 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 16:07:31.271338] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.544 [2024-11-20 16:07:31.362965] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:30.544 [2024-11-20 16:07:31.370958] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:30.544 [2024-11-20 16:07:31.378958] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:30.803 [2024-11-20 16:07:31.386953] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:30.803 [2024-11-20 16:07:31.416042] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.803 passed 00:14:30.803 Test: admin_create_io_sq_verify_pc ...[2024-11-20 16:07:31.491617] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.803 [2024-11-20 16:07:31.510964] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:30.803 [2024-11-20 16:07:31.528808] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.803 passed 00:14:30.803 Test: admin_create_io_qp_max_qps ...[2024-11-20 16:07:31.604366] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.181 [2024-11-20 16:07:32.701957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:32.439 [2024-11-20 16:07:33.080516] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.439 passed 00:14:32.439 Test: admin_create_io_sq_shared_cq ...[2024-11-20 16:07:33.159491] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.699 [2024-11-20 16:07:33.291959] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:32.699 [2024-11-20 16:07:33.329024] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.699 passed 00:14:32.699 00:14:32.699 Run Summary: Type Total Ran Passed Failed Inactive 00:14:32.699 suites 1 1 n/a 0 0 00:14:32.699 tests 18 18 18 0 0 00:14:32.699 asserts 360 360 360 0 n/a 00:14:32.699 00:14:32.699 Elapsed time = 1.506 seconds 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2706188 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2706188 ']' 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2706188 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2706188 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2706188' 00:14:32.699 killing process with pid 2706188 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2706188 00:14:32.699 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2706188 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:32.958 00:14:32.958 real 0m5.678s 00:14:32.958 user 0m15.866s 00:14:32.958 sys 0m0.527s 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.958 ************************************ 00:14:32.958 END TEST nvmf_vfio_user_nvme_compliance 00:14:32.958 ************************************ 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.958 ************************************ 00:14:32.958 START TEST nvmf_vfio_user_fuzz 00:14:32.958 ************************************ 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:32.958 * Looking for test storage... 00:14:32.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:32.958 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.217 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.218 16:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.218 --rc genhtml_branch_coverage=1 00:14:33.218 --rc genhtml_function_coverage=1 00:14:33.218 --rc genhtml_legend=1 00:14:33.218 --rc geninfo_all_blocks=1 00:14:33.218 --rc geninfo_unexecuted_blocks=1 00:14:33.218 00:14:33.218 ' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.218 --rc genhtml_branch_coverage=1 00:14:33.218 --rc genhtml_function_coverage=1 00:14:33.218 --rc genhtml_legend=1 00:14:33.218 --rc geninfo_all_blocks=1 00:14:33.218 --rc geninfo_unexecuted_blocks=1 00:14:33.218 00:14:33.218 ' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.218 --rc genhtml_branch_coverage=1 00:14:33.218 --rc genhtml_function_coverage=1 00:14:33.218 --rc genhtml_legend=1 00:14:33.218 --rc geninfo_all_blocks=1 00:14:33.218 --rc geninfo_unexecuted_blocks=1 00:14:33.218 00:14:33.218 ' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:33.218 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:33.218 --rc genhtml_branch_coverage=1 00:14:33.218 --rc genhtml_function_coverage=1 00:14:33.218 --rc genhtml_legend=1 00:14:33.218 --rc geninfo_all_blocks=1 00:14:33.218 --rc geninfo_unexecuted_blocks=1 00:14:33.218 00:14:33.218 ' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.218 16:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:33.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:33.218 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2707172 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2707172' 00:14:33.219 Process pid: 2707172 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2707172 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2707172 ']' 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.219 16:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.219 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:33.478 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.478 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:33.478 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 malloc0 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:34.413 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:06.490 Fuzzing completed. Shutting down the fuzz application 00:15:06.490 00:15:06.490 Dumping successful admin opcodes: 00:15:06.490 8, 9, 10, 24, 00:15:06.490 Dumping successful io opcodes: 00:15:06.490 0, 00:15:06.490 NS: 0x20000081ef00 I/O qp, Total commands completed: 1114751, total successful commands: 4386, random_seed: 714882048 00:15:06.490 NS: 0x20000081ef00 admin qp, Total commands completed: 275734, total successful commands: 2229, random_seed: 69343616 00:15:06.490 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2707172 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2707172 ']' 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2707172 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2707172 00:15:06.491 16:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2707172' 00:15:06.491 killing process with pid 2707172 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2707172 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2707172 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:06.491 00:15:06.491 real 0m32.222s 00:15:06.491 user 0m34.069s 00:15:06.491 sys 0m27.330s 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.491 ************************************ 00:15:06.491 END TEST nvmf_vfio_user_fuzz 00:15:06.491 ************************************ 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.491 ************************************ 00:15:06.491 START TEST nvmf_auth_target 00:15:06.491 ************************************ 00:15:06.491 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:06.491 * Looking for test storage... 00:15:06.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.491 16:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.491 16:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.491 --rc genhtml_branch_coverage=1 00:15:06.491 --rc genhtml_function_coverage=1 00:15:06.491 --rc genhtml_legend=1 00:15:06.491 --rc geninfo_all_blocks=1 00:15:06.491 --rc geninfo_unexecuted_blocks=1 00:15:06.491 00:15:06.491 ' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.491 --rc genhtml_branch_coverage=1 00:15:06.491 --rc genhtml_function_coverage=1 00:15:06.491 --rc genhtml_legend=1 00:15:06.491 --rc geninfo_all_blocks=1 00:15:06.491 --rc geninfo_unexecuted_blocks=1 00:15:06.491 00:15:06.491 ' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.491 --rc genhtml_branch_coverage=1 00:15:06.491 --rc genhtml_function_coverage=1 00:15:06.491 --rc genhtml_legend=1 00:15:06.491 --rc geninfo_all_blocks=1 00:15:06.491 --rc geninfo_unexecuted_blocks=1 00:15:06.491 00:15:06.491 ' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.491 --rc genhtml_branch_coverage=1 00:15:06.491 --rc genhtml_function_coverage=1 00:15:06.491 --rc genhtml_legend=1 00:15:06.491 
--rc geninfo_all_blocks=1 00:15:06.491 --rc geninfo_unexecuted_blocks=1 00:15:06.491 00:15:06.491 ' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.491 
16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.491 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:06.492 16:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:06.492 16:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:06.492 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.882 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.882 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:11.883 16:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:11.883 16:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:11.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:11.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.883 
16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:11.883 Found net devices under 0000:86:00.0: cvl_0_0 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:11.883 
16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:11.883 Found net devices under 0000:86:00.1: cvl_0_1 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:11.883 16:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:11.883 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.883 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.883 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:11.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:15:11.884 00:15:11.884 --- 10.0.0.2 ping statistics --- 00:15:11.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.884 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:15:11.884 00:15:11.884 --- 10.0.0.1 ping statistics --- 00:15:11.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.884 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2715476 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2715476 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2715476 ']' 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2715588 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0c7a46c2f268e9bdabeb339eec56ef738caf9cf795fcd504 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pAB 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0c7a46c2f268e9bdabeb339eec56ef738caf9cf795fcd504 0 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0c7a46c2f268e9bdabeb339eec56ef738caf9cf795fcd504 0 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0c7a46c2f268e9bdabeb339eec56ef738caf9cf795fcd504 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pAB 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pAB 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pAB 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7f3d8ca04cc75b8e12ce4e3d1c8824b0ce652e3563661cb287d349d07710d95b 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fr9 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7f3d8ca04cc75b8e12ce4e3d1c8824b0ce652e3563661cb287d349d07710d95b 3 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7f3d8ca04cc75b8e12ce4e3d1c8824b0ce652e3563661cb287d349d07710d95b 3 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7f3d8ca04cc75b8e12ce4e3d1c8824b0ce652e3563661cb287d349d07710d95b 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fr9 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fr9 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.fr9 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ecc2d64ae48b2da43c7ec7f0e34f339 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.g8J 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ecc2d64ae48b2da43c7ec7f0e34f339 1 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
9ecc2d64ae48b2da43c7ec7f0e34f339 1 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ecc2d64ae48b2da43c7ec7f0e34f339 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.g8J 00:15:11.884 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.g8J 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.g8J 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fee1caea1cb8f807e60247cd1cc4de4b1aaef0d695553e67 00:15:11.885 16:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vZ7 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fee1caea1cb8f807e60247cd1cc4de4b1aaef0d695553e67 2 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fee1caea1cb8f807e60247cd1cc4de4b1aaef0d695553e67 2 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fee1caea1cb8f807e60247cd1cc4de4b1aaef0d695553e67 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vZ7 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vZ7 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.vZ7 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6d1884030d2ca7adabe88afafc1c53613b391c3ccd00d689 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vUj 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6d1884030d2ca7adabe88afafc1c53613b391c3ccd00d689 2 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6d1884030d2ca7adabe88afafc1c53613b391c3ccd00d689 2 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6d1884030d2ca7adabe88afafc1c53613b391c3ccd00d689 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:11.885 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vUj 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vUj 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.vUj 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=82a3b5ecdc1bae94b53406019ab7c1e0 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Nox 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 82a3b5ecdc1bae94b53406019ab7c1e0 1 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 82a3b5ecdc1bae94b53406019ab7c1e0 1 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=82a3b5ecdc1bae94b53406019ab7c1e0 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Nox 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Nox 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Nox 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9e93136f02920e00866c90610be7e7536838cde21bc7b7fc583fd40c8131b4b4 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.L6T 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9e93136f02920e00866c90610be7e7536838cde21bc7b7fc583fd40c8131b4b4 3 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9e93136f02920e00866c90610be7e7536838cde21bc7b7fc583fd40c8131b4b4 3 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9e93136f02920e00866c90610be7e7536838cde21bc7b7fc583fd40c8131b4b4 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.L6T 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.L6T 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.L6T 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2715476 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2715476 ']' 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.145 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
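The `gen_dhchap_key` calls above read N random bytes with `xxd -p -c0 -l N /dev/urandom` and hand the hex string to `format_dhchap_key`, which (via the inline `python -` step) emits an in-band secret of the form `DHHC-1:<digest-id>:<base64 payload>:`. A minimal sketch of that representation, assuming the payload is the raw key followed by its little-endian CRC32 (as in nvme-cli's `gen-dhchap-key`); the helper names here are illustrative, not SPDK's:

```python
import base64
import zlib

def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Wrap a raw hex key as an in-band DHHC-1 secret.

    digest_id: 0 = no transform, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512,
    matching the digests map used by gen_dhchap_key in the log above.
    """
    key = bytes.fromhex(hex_key)
    # Payload is the key bytes plus a little-endian CRC32 trailer.
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{digest_id:02x}:{payload}:"

def check_dhchap_key(secret: str) -> bool:
    """Verify the CRC32 trailer of a DHHC-1 secret."""
    _, _, payload, _ = secret.split(":")
    raw = base64.b64decode(payload)
    key, crc = raw[:-4], raw[-4:]
    return zlib.crc32(key).to_bytes(4, "little") == crc

# The sha256 key generated in the log (16 random bytes, digest id 1).
secret = format_dhchap_key("82a3b5ecdc1bae94b53406019ab7c1e0", 1)
print(secret)
```

Note how `gen_dhchap_key sha256 32` asks `xxd` for only 16 bytes: the length argument counts hex characters, not bytes.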
00:15:12.146 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.146 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2715588 /var/tmp/host.sock 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2715588 ']' 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:12.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.405 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pAB 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pAB 00:15:12.663 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pAB 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.fr9 ]] 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fr9 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fr9 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fr9 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.g8J 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.g8J 00:15:12.922 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.g8J 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.vZ7 ]] 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vZ7 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vZ7 00:15:13.181 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vZ7 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vUj 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vUj 00:15:13.439 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vUj 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Nox ]] 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nox 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nox 00:15:13.698 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nox 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L6T 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.L6T 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.L6T 00:15:13.958 16:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:13.958 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.217 16:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.217 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.476 00:15:14.476 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.476 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.476 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.735 { 00:15:14.735 "cntlid": 1, 00:15:14.735 "qid": 0, 00:15:14.735 "state": "enabled", 00:15:14.735 "thread": "nvmf_tgt_poll_group_000", 00:15:14.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.735 "listen_address": { 00:15:14.735 "trtype": "TCP", 00:15:14.735 "adrfam": "IPv4", 00:15:14.735 "traddr": "10.0.0.2", 00:15:14.735 "trsvcid": "4420" 00:15:14.735 }, 00:15:14.735 "peer_address": { 00:15:14.735 "trtype": "TCP", 00:15:14.735 "adrfam": "IPv4", 00:15:14.735 "traddr": "10.0.0.1", 00:15:14.735 "trsvcid": "40610" 00:15:14.735 }, 00:15:14.735 "auth": { 00:15:14.735 "state": "completed", 00:15:14.735 "digest": "sha256", 00:15:14.735 "dhgroup": "null" 00:15:14.735 } 00:15:14.735 } 00:15:14.735 ]' 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:14.735 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.994 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.994 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.994 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.994 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:14.994 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:15.562 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.826 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.085 00:15:16.085 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.085 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.085 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.344 { 00:15:16.344 "cntlid": 3, 00:15:16.344 "qid": 0, 00:15:16.344 "state": "enabled", 00:15:16.344 "thread": "nvmf_tgt_poll_group_000", 00:15:16.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:16.344 "listen_address": { 00:15:16.344 "trtype": "TCP", 00:15:16.344 "adrfam": "IPv4", 00:15:16.344 
"traddr": "10.0.0.2", 00:15:16.344 "trsvcid": "4420" 00:15:16.344 }, 00:15:16.344 "peer_address": { 00:15:16.344 "trtype": "TCP", 00:15:16.344 "adrfam": "IPv4", 00:15:16.344 "traddr": "10.0.0.1", 00:15:16.344 "trsvcid": "40634" 00:15:16.344 }, 00:15:16.344 "auth": { 00:15:16.344 "state": "completed", 00:15:16.344 "digest": "sha256", 00:15:16.344 "dhgroup": "null" 00:15:16.344 } 00:15:16.344 } 00:15:16.344 ]' 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.344 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.603 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.603 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.603 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.603 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:16.603 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.169 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.428 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.429 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.687 00:15:17.687 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.687 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.687 
16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.946 { 00:15:17.946 "cntlid": 5, 00:15:17.946 "qid": 0, 00:15:17.946 "state": "enabled", 00:15:17.946 "thread": "nvmf_tgt_poll_group_000", 00:15:17.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.946 "listen_address": { 00:15:17.946 "trtype": "TCP", 00:15:17.946 "adrfam": "IPv4", 00:15:17.946 "traddr": "10.0.0.2", 00:15:17.946 "trsvcid": "4420" 00:15:17.946 }, 00:15:17.946 "peer_address": { 00:15:17.946 "trtype": "TCP", 00:15:17.946 "adrfam": "IPv4", 00:15:17.946 "traddr": "10.0.0.1", 00:15:17.946 "trsvcid": "40652" 00:15:17.946 }, 00:15:17.946 "auth": { 00:15:17.946 "state": "completed", 00:15:17.946 "digest": "sha256", 00:15:17.946 "dhgroup": "null" 00:15:17.946 } 00:15:17.946 } 00:15:17.946 ]' 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:17.946 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.205 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.205 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.205 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.205 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:18.205 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:18.773 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.031 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.290 00:15:19.290 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.290 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.290 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.548 
16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.548 { 00:15:19.548 "cntlid": 7, 00:15:19.548 "qid": 0, 00:15:19.548 "state": "enabled", 00:15:19.548 "thread": "nvmf_tgt_poll_group_000", 00:15:19.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:19.548 "listen_address": { 00:15:19.548 "trtype": "TCP", 00:15:19.548 "adrfam": "IPv4", 00:15:19.548 "traddr": "10.0.0.2", 00:15:19.548 "trsvcid": "4420" 00:15:19.548 }, 00:15:19.548 "peer_address": { 00:15:19.548 "trtype": "TCP", 00:15:19.548 "adrfam": "IPv4", 00:15:19.548 "traddr": "10.0.0.1", 00:15:19.548 "trsvcid": "40662" 00:15:19.548 }, 00:15:19.548 "auth": { 00:15:19.548 "state": "completed", 00:15:19.548 "digest": "sha256", 00:15:19.548 "dhgroup": "null" 00:15:19.548 } 00:15:19.548 } 00:15:19.548 ]' 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.548 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.549 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.549 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:19.549 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.808 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.808 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.808 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.808 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:19.808 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.374 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.375 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.633 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.893 00:15:20.893 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.893 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.893 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.152 { 00:15:21.152 "cntlid": 9, 00:15:21.152 "qid": 0, 00:15:21.152 "state": "enabled", 00:15:21.152 "thread": "nvmf_tgt_poll_group_000", 00:15:21.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.152 "listen_address": { 00:15:21.152 "trtype": "TCP", 00:15:21.152 "adrfam": "IPv4", 00:15:21.152 "traddr": "10.0.0.2", 00:15:21.152 "trsvcid": "4420" 00:15:21.152 }, 00:15:21.152 "peer_address": { 00:15:21.152 "trtype": "TCP", 00:15:21.152 "adrfam": "IPv4", 00:15:21.152 "traddr": "10.0.0.1", 00:15:21.152 "trsvcid": "40684" 00:15:21.152 
}, 00:15:21.152 "auth": { 00:15:21.152 "state": "completed", 00:15:21.152 "digest": "sha256", 00:15:21.152 "dhgroup": "ffdhe2048" 00:15:21.152 } 00:15:21.152 } 00:15:21.152 ]' 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.152 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.411 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.411 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.411 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.411 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:21.411 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret 
DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:21.978 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.238 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.238 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.238 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.238 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.497 00:15:22.497 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.497 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.497 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.756 { 00:15:22.756 "cntlid": 11, 00:15:22.756 "qid": 0, 00:15:22.756 "state": "enabled", 00:15:22.756 "thread": "nvmf_tgt_poll_group_000", 00:15:22.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.756 "listen_address": { 00:15:22.756 "trtype": "TCP", 00:15:22.756 "adrfam": "IPv4", 00:15:22.756 "traddr": "10.0.0.2", 00:15:22.756 "trsvcid": "4420" 00:15:22.756 }, 00:15:22.756 "peer_address": { 00:15:22.756 "trtype": "TCP", 00:15:22.756 "adrfam": "IPv4", 00:15:22.756 "traddr": "10.0.0.1", 00:15:22.756 "trsvcid": "40714" 00:15:22.756 }, 00:15:22.756 "auth": { 00:15:22.756 "state": "completed", 00:15:22.756 "digest": "sha256", 00:15:22.756 "dhgroup": "ffdhe2048" 00:15:22.756 } 00:15:22.756 } 00:15:22.756 ]' 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.756 16:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:22.756 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.015 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.015 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.015 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.015 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:23.015 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.583 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.842 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.102 00:15:24.102 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.102 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.102 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.360 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.360 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.361 16:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.361 { 00:15:24.361 "cntlid": 13, 00:15:24.361 "qid": 0, 00:15:24.361 "state": "enabled", 00:15:24.361 "thread": "nvmf_tgt_poll_group_000", 00:15:24.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.361 "listen_address": { 00:15:24.361 "trtype": "TCP", 00:15:24.361 "adrfam": "IPv4", 00:15:24.361 "traddr": "10.0.0.2", 00:15:24.361 "trsvcid": "4420" 00:15:24.361 }, 00:15:24.361 "peer_address": { 00:15:24.361 "trtype": "TCP", 00:15:24.361 "adrfam": "IPv4", 00:15:24.361 "traddr": "10.0.0.1", 00:15:24.361 "trsvcid": "45172" 00:15:24.361 }, 00:15:24.361 "auth": { 00:15:24.361 "state": "completed", 00:15:24.361 "digest": "sha256", 00:15:24.361 "dhgroup": "ffdhe2048" 00:15:24.361 } 00:15:24.361 } 00:15:24.361 ]' 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:24.361 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.619 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.619 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.619 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.619 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:24.619 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:25.187 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.187 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.445 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.704 00:15:25.704 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.704 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.704 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.963 { 00:15:25.963 "cntlid": 15, 00:15:25.963 "qid": 0, 00:15:25.963 "state": "enabled", 00:15:25.963 "thread": "nvmf_tgt_poll_group_000", 00:15:25.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.963 "listen_address": { 00:15:25.963 "trtype": "TCP", 00:15:25.963 "adrfam": "IPv4", 00:15:25.963 "traddr": "10.0.0.2", 00:15:25.963 "trsvcid": "4420" 00:15:25.963 }, 00:15:25.963 "peer_address": { 00:15:25.963 "trtype": "TCP", 00:15:25.963 "adrfam": "IPv4", 00:15:25.963 "traddr": "10.0.0.1", 
00:15:25.963 "trsvcid": "45192" 00:15:25.963 }, 00:15:25.963 "auth": { 00:15:25.963 "state": "completed", 00:15:25.963 "digest": "sha256", 00:15:25.963 "dhgroup": "ffdhe2048" 00:15:25.963 } 00:15:25.963 } 00:15:25.963 ]' 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.963 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.222 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.222 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.222 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.222 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:26.222 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:26.787 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.046 16:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.046 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.304 00:15:27.304 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.304 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.304 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.562 { 00:15:27.562 "cntlid": 17, 00:15:27.562 "qid": 0, 00:15:27.562 "state": "enabled", 00:15:27.562 "thread": "nvmf_tgt_poll_group_000", 00:15:27.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.562 "listen_address": { 00:15:27.562 "trtype": "TCP", 00:15:27.562 "adrfam": "IPv4", 00:15:27.562 "traddr": "10.0.0.2", 00:15:27.562 "trsvcid": "4420" 00:15:27.562 }, 00:15:27.562 "peer_address": { 00:15:27.562 "trtype": "TCP", 00:15:27.562 "adrfam": "IPv4", 00:15:27.562 "traddr": "10.0.0.1", 00:15:27.562 "trsvcid": "45212" 00:15:27.562 }, 00:15:27.562 "auth": { 00:15:27.562 "state": "completed", 00:15:27.562 "digest": "sha256", 00:15:27.562 "dhgroup": "ffdhe3072" 00:15:27.562 } 00:15:27.562 } 00:15:27.562 ]' 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.562 16:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.562 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.821 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.821 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.821 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.821 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:27.821 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.390 16:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:28.390 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.649 16:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.649 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.907 00:15:28.907 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.907 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.907 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.167 { 00:15:29.167 "cntlid": 19, 00:15:29.167 "qid": 0, 00:15:29.167 "state": "enabled", 00:15:29.167 "thread": "nvmf_tgt_poll_group_000", 00:15:29.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.167 "listen_address": { 00:15:29.167 "trtype": "TCP", 00:15:29.167 "adrfam": "IPv4", 00:15:29.167 "traddr": "10.0.0.2", 00:15:29.167 "trsvcid": "4420" 00:15:29.167 }, 00:15:29.167 "peer_address": { 00:15:29.167 "trtype": "TCP", 00:15:29.167 "adrfam": "IPv4", 00:15:29.167 "traddr": "10.0.0.1", 00:15:29.167 "trsvcid": "45242" 00:15:29.167 }, 00:15:29.167 "auth": { 00:15:29.167 "state": "completed", 00:15:29.167 "digest": "sha256", 00:15:29.167 "dhgroup": "ffdhe3072" 00:15:29.167 } 00:15:29.167 } 00:15:29.167 ]' 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.167 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.427 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.427 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.427 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.427 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:29.427 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.994 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.994 16:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.254 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.513 00:15:30.513 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.513 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.513 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.772 { 00:15:30.772 "cntlid": 21, 00:15:30.772 "qid": 0, 00:15:30.772 "state": "enabled", 00:15:30.772 "thread": "nvmf_tgt_poll_group_000", 00:15:30.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.772 "listen_address": { 00:15:30.772 "trtype": "TCP", 00:15:30.772 "adrfam": "IPv4", 00:15:30.772 "traddr": "10.0.0.2", 00:15:30.772 
"trsvcid": "4420" 00:15:30.772 }, 00:15:30.772 "peer_address": { 00:15:30.772 "trtype": "TCP", 00:15:30.772 "adrfam": "IPv4", 00:15:30.772 "traddr": "10.0.0.1", 00:15:30.772 "trsvcid": "45272" 00:15:30.772 }, 00:15:30.772 "auth": { 00:15:30.772 "state": "completed", 00:15:30.772 "digest": "sha256", 00:15:30.772 "dhgroup": "ffdhe3072" 00:15:30.772 } 00:15:30.772 } 00:15:30.772 ]' 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.772 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.030 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.030 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.030 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.030 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:31.030 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:31.597 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.598 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.598 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.598 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.856 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.857 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.115 00:15:32.115 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.115 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.115 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.374 { 00:15:32.374 "cntlid": 23, 00:15:32.374 "qid": 0, 00:15:32.374 "state": "enabled", 00:15:32.374 "thread": "nvmf_tgt_poll_group_000", 00:15:32.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.374 "listen_address": { 00:15:32.374 "trtype": "TCP", 00:15:32.374 "adrfam": "IPv4", 00:15:32.374 "traddr": "10.0.0.2", 00:15:32.374 "trsvcid": "4420" 00:15:32.374 }, 00:15:32.374 "peer_address": { 00:15:32.374 "trtype": "TCP", 00:15:32.374 "adrfam": "IPv4", 00:15:32.374 "traddr": "10.0.0.1", 00:15:32.374 "trsvcid": "45288" 00:15:32.374 }, 00:15:32.374 "auth": { 00:15:32.374 "state": "completed", 00:15:32.374 "digest": "sha256", 00:15:32.374 "dhgroup": "ffdhe3072" 00:15:32.374 } 00:15:32.374 } 00:15:32.374 ]' 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.374 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.633 16:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.633 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.633 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.633 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.633 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.633 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:32.633 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:33.200 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.459 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.459 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.459 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:33.459 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.460 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.719
00:15:33.719 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:33.719 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:33.719 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:33.978 {
00:15:33.978 "cntlid": 25,
00:15:33.978 "qid": 0,
00:15:33.978 "state": "enabled",
00:15:33.978 "thread": "nvmf_tgt_poll_group_000",
00:15:33.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:33.978 "listen_address": {
00:15:33.978 "trtype": "TCP",
00:15:33.978 "adrfam": "IPv4",
00:15:33.978 "traddr": "10.0.0.2",
00:15:33.978 "trsvcid": "4420"
00:15:33.978 },
00:15:33.978 "peer_address": {
00:15:33.978 "trtype": "TCP",
00:15:33.978 "adrfam": "IPv4",
00:15:33.978 "traddr": "10.0.0.1",
00:15:33.978 "trsvcid": "45394"
00:15:33.978 },
00:15:33.978 "auth": {
00:15:33.978 "state": "completed",
00:15:33.978 "digest": "sha256",
00:15:33.978 "dhgroup": "ffdhe4096"
00:15:33.978 }
00:15:33.978 }
00:15:33.978 ]'
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:33.978 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:34.237 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:34.237 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:34.237 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:34.237 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:34.237 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:34.237 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:15:34.237 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:34.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:34.804 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.064 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:35.323
00:15:35.323 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:35.323 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:35.323 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:35.581 {
00:15:35.581 "cntlid": 27,
00:15:35.581 "qid": 0,
00:15:35.581 "state": "enabled",
00:15:35.581 "thread": "nvmf_tgt_poll_group_000",
00:15:35.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:35.581 "listen_address": {
00:15:35.581 "trtype": "TCP",
00:15:35.581 "adrfam": "IPv4",
00:15:35.581 "traddr": "10.0.0.2",
00:15:35.581 "trsvcid": "4420"
00:15:35.581 },
00:15:35.581 "peer_address": {
00:15:35.581 "trtype": "TCP",
00:15:35.581 "adrfam": "IPv4",
00:15:35.581 "traddr": "10.0.0.1",
00:15:35.581 "trsvcid": "45418"
00:15:35.581 },
00:15:35.581 "auth": {
00:15:35.581 "state": "completed",
00:15:35.581 "digest": "sha256",
00:15:35.581 "dhgroup": "ffdhe4096"
00:15:35.581 }
00:15:35.581 }
00:15:35.581 ]'
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:35.581 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:35.841 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:35.841 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:35.841 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:35.841 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:15:35.841 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:36.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:36.410 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.669 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:36.928
00:15:36.928 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:36.928 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:36.928 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:37.187 {
00:15:37.187 "cntlid": 29,
00:15:37.187 "qid": 0,
00:15:37.187 "state": "enabled",
00:15:37.187 "thread": "nvmf_tgt_poll_group_000",
00:15:37.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:37.187 "listen_address": {
00:15:37.187 "trtype": "TCP",
00:15:37.187 "adrfam": "IPv4",
00:15:37.187 "traddr": "10.0.0.2",
00:15:37.187 "trsvcid": "4420"
00:15:37.187 },
00:15:37.187 "peer_address": {
00:15:37.187 "trtype": "TCP",
00:15:37.187 "adrfam": "IPv4",
00:15:37.187 "traddr": "10.0.0.1",
00:15:37.187 "trsvcid": "45444"
00:15:37.187 },
00:15:37.187 "auth": {
00:15:37.187 "state": "completed",
00:15:37.187 "digest": "sha256",
00:15:37.187 "dhgroup": "ffdhe4096"
00:15:37.187 }
00:15:37.187 }
00:15:37.187 ]'
00:15:37.187 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:37.445 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:37.703 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:15:37.703 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:38.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:38.270 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.270 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.528 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.528 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:38.528 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:38.528 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:38.786
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:38.786 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:38.786 {
00:15:38.786 "cntlid": 31,
00:15:38.786 "qid": 0,
00:15:38.786 "state": "enabled",
00:15:38.786 "thread": "nvmf_tgt_poll_group_000",
00:15:38.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:38.786 "listen_address": {
00:15:38.786 "trtype": "TCP",
00:15:38.786 "adrfam": "IPv4",
00:15:38.786 "traddr": "10.0.0.2",
00:15:38.786 "trsvcid": "4420"
00:15:38.786 },
00:15:38.786 "peer_address": {
00:15:38.786 "trtype": "TCP",
00:15:38.786 "adrfam": "IPv4",
00:15:38.786 "traddr": "10.0.0.1",
00:15:38.786 "trsvcid": "45484"
00:15:38.786 },
00:15:38.786 "auth": {
00:15:38.786 "state": "completed",
00:15:38.786 "digest": "sha256",
00:15:38.786 "dhgroup": "ffdhe4096"
00:15:38.786 }
00:15:38.786 }
00:15:38.786 ]'
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:39.045 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:39.304 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=:
00:15:39.304 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=:
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:39.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:39.893 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:40.460
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.460 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:40.718 {
00:15:40.718 "cntlid": 33,
00:15:40.718 "qid": 0,
00:15:40.718 "state": "enabled",
00:15:40.718 "thread": "nvmf_tgt_poll_group_000",
00:15:40.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:40.718 "listen_address": {
00:15:40.718 "trtype": "TCP",
00:15:40.718 "adrfam": "IPv4",
00:15:40.718 "traddr": "10.0.0.2",
00:15:40.718 "trsvcid": "4420"
00:15:40.718 },
00:15:40.718 "peer_address": {
00:15:40.718 "trtype": "TCP",
00:15:40.718 "adrfam": "IPv4",
00:15:40.718 "traddr": "10.0.0.1",
00:15:40.718 "trsvcid": "45508"
00:15:40.718 },
00:15:40.718 "auth": {
00:15:40.718 "state": "completed",
00:15:40.718 "digest": "sha256",
00:15:40.718 "dhgroup": "ffdhe6144"
00:15:40.718 }
00:15:40.718 }
00:15:40.718 ]'
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:40.718 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:40.977 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:15:40.977 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:41.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:41.543 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:41.801 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:42.059
00:15:42.059 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.059 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.059 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.318 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.318 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.318 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.318 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.318 { 00:15:42.318 "cntlid": 35, 00:15:42.318 "qid": 0, 00:15:42.318 "state": "enabled", 00:15:42.318 "thread": "nvmf_tgt_poll_group_000", 00:15:42.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.318 "listen_address": { 00:15:42.318 "trtype": "TCP", 00:15:42.318 "adrfam": "IPv4", 00:15:42.318 "traddr": "10.0.0.2", 00:15:42.318 "trsvcid": "4420" 00:15:42.318 }, 00:15:42.318 "peer_address": { 00:15:42.318 "trtype": "TCP", 00:15:42.318 "adrfam": "IPv4", 00:15:42.318 "traddr": "10.0.0.1", 00:15:42.318 "trsvcid": "45548" 00:15:42.318 }, 00:15:42.318 "auth": { 00:15:42.318 "state": "completed", 00:15:42.318 "digest": "sha256", 00:15:42.318 "dhgroup": "ffdhe6144" 00:15:42.318 } 00:15:42.318 } 00:15:42.318 ]' 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.318 16:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.318 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.577 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:42.577 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.145 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.403 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.971 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.971 16:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.971 { 00:15:43.971 "cntlid": 37, 00:15:43.971 "qid": 0, 00:15:43.971 "state": "enabled", 00:15:43.971 "thread": "nvmf_tgt_poll_group_000", 00:15:43.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.971 "listen_address": { 00:15:43.971 "trtype": "TCP", 00:15:43.971 "adrfam": "IPv4", 00:15:43.971 "traddr": "10.0.0.2", 00:15:43.971 "trsvcid": "4420" 00:15:43.971 }, 00:15:43.971 "peer_address": { 00:15:43.971 "trtype": "TCP", 00:15:43.971 "adrfam": "IPv4", 00:15:43.971 "traddr": "10.0.0.1", 00:15:43.971 "trsvcid": "37064" 00:15:43.971 }, 00:15:43.971 "auth": { 00:15:43.971 "state": "completed", 00:15:43.971 "digest": "sha256", 00:15:43.971 "dhgroup": "ffdhe6144" 00:15:43.971 } 00:15:43.971 } 00:15:43.971 ]' 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.971 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.230 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.230 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.230 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.230 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.230 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.230 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:44.230 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.797 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.056 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.622 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.622 { 00:15:45.622 "cntlid": 39, 00:15:45.622 "qid": 0, 00:15:45.622 "state": "enabled", 00:15:45.622 "thread": "nvmf_tgt_poll_group_000", 00:15:45.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.622 "listen_address": { 00:15:45.622 "trtype": "TCP", 00:15:45.622 "adrfam": 
"IPv4", 00:15:45.622 "traddr": "10.0.0.2", 00:15:45.622 "trsvcid": "4420" 00:15:45.622 }, 00:15:45.622 "peer_address": { 00:15:45.622 "trtype": "TCP", 00:15:45.622 "adrfam": "IPv4", 00:15:45.622 "traddr": "10.0.0.1", 00:15:45.622 "trsvcid": "37092" 00:15:45.622 }, 00:15:45.622 "auth": { 00:15:45.622 "state": "completed", 00:15:45.622 "digest": "sha256", 00:15:45.622 "dhgroup": "ffdhe6144" 00:15:45.622 } 00:15:45.622 } 00:15:45.622 ]' 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.622 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.881 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.881 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.881 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.881 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.881 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.139 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:46.139 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.707 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.708 
16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.708 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.373 00:15:47.373 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.373 16:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.373 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.632 { 00:15:47.632 "cntlid": 41, 00:15:47.632 "qid": 0, 00:15:47.632 "state": "enabled", 00:15:47.632 "thread": "nvmf_tgt_poll_group_000", 00:15:47.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.632 "listen_address": { 00:15:47.632 "trtype": "TCP", 00:15:47.632 "adrfam": "IPv4", 00:15:47.632 "traddr": "10.0.0.2", 00:15:47.632 "trsvcid": "4420" 00:15:47.632 }, 00:15:47.632 "peer_address": { 00:15:47.632 "trtype": "TCP", 00:15:47.632 "adrfam": "IPv4", 00:15:47.632 "traddr": "10.0.0.1", 00:15:47.632 "trsvcid": "37126" 00:15:47.632 }, 00:15:47.632 "auth": { 00:15:47.632 "state": "completed", 00:15:47.632 "digest": "sha256", 00:15:47.632 "dhgroup": "ffdhe8192" 00:15:47.632 } 00:15:47.632 } 00:15:47.632 ]' 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.632 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.891 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:47.891 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.460 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.719 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.287 00:15:49.287 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.287 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.287 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.287 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.287 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.287 16:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.287 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.287 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.287 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.287 { 00:15:49.287 "cntlid": 43, 00:15:49.287 "qid": 0, 00:15:49.287 "state": "enabled", 00:15:49.287 "thread": "nvmf_tgt_poll_group_000", 00:15:49.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.287 "listen_address": { 00:15:49.287 "trtype": "TCP", 00:15:49.287 "adrfam": "IPv4", 00:15:49.287 "traddr": "10.0.0.2", 00:15:49.287 "trsvcid": "4420" 00:15:49.287 }, 00:15:49.287 "peer_address": { 00:15:49.287 "trtype": "TCP", 00:15:49.287 "adrfam": "IPv4", 00:15:49.287 "traddr": "10.0.0.1", 00:15:49.287 "trsvcid": "37152" 00:15:49.287 }, 00:15:49.287 "auth": { 00:15:49.287 "state": "completed", 00:15:49.287 "digest": "sha256", 00:15:49.287 "dhgroup": "ffdhe8192" 00:15:49.287 } 00:15:49.287 } 00:15:49.287 ]' 00:15:49.287 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.546 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.805 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:49.805 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.372 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.631 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.890 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.150 { 00:15:51.150 "cntlid": 45, 00:15:51.150 "qid": 0, 00:15:51.150 "state": "enabled", 00:15:51.150 "thread": "nvmf_tgt_poll_group_000", 00:15:51.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.150 
"listen_address": { 00:15:51.150 "trtype": "TCP", 00:15:51.150 "adrfam": "IPv4", 00:15:51.150 "traddr": "10.0.0.2", 00:15:51.150 "trsvcid": "4420" 00:15:51.150 }, 00:15:51.150 "peer_address": { 00:15:51.150 "trtype": "TCP", 00:15:51.150 "adrfam": "IPv4", 00:15:51.150 "traddr": "10.0.0.1", 00:15:51.150 "trsvcid": "37166" 00:15:51.150 }, 00:15:51.150 "auth": { 00:15:51.150 "state": "completed", 00:15:51.150 "digest": "sha256", 00:15:51.150 "dhgroup": "ffdhe8192" 00:15:51.150 } 00:15:51.150 } 00:15:51.150 ]' 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.150 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.409 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.409 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.409 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.409 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.409 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.668 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:51.668 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.234 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.234 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:52.234 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.234 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:52.234 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.235 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.803 00:15:52.803 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.803 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:52.803 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.062 { 00:15:53.062 "cntlid": 47, 00:15:53.062 "qid": 0, 00:15:53.062 "state": "enabled", 00:15:53.062 "thread": "nvmf_tgt_poll_group_000", 00:15:53.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.062 "listen_address": { 00:15:53.062 "trtype": "TCP", 00:15:53.062 "adrfam": "IPv4", 00:15:53.062 "traddr": "10.0.0.2", 00:15:53.062 "trsvcid": "4420" 00:15:53.062 }, 00:15:53.062 "peer_address": { 00:15:53.062 "trtype": "TCP", 00:15:53.062 "adrfam": "IPv4", 00:15:53.062 "traddr": "10.0.0.1", 00:15:53.062 "trsvcid": "37180" 00:15:53.062 }, 00:15:53.062 "auth": { 00:15:53.062 "state": "completed", 00:15:53.062 "digest": "sha256", 00:15:53.062 "dhgroup": "ffdhe8192" 00:15:53.062 } 00:15:53.062 } 00:15:53.062 ]' 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.062 16:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.062 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.321 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:53.321 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.890 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.149 
16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.149 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.150 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.150 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.150 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.150 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.408 00:15:54.408 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.408 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.408 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.667 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.667 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.667 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.667 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.667 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.667 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.667 { 00:15:54.667 "cntlid": 49, 00:15:54.667 "qid": 0, 00:15:54.667 "state": "enabled", 00:15:54.667 "thread": "nvmf_tgt_poll_group_000", 00:15:54.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.667 "listen_address": { 00:15:54.667 "trtype": "TCP", 00:15:54.667 "adrfam": "IPv4", 00:15:54.667 "traddr": "10.0.0.2", 00:15:54.667 "trsvcid": "4420" 00:15:54.667 }, 00:15:54.667 "peer_address": { 00:15:54.667 "trtype": "TCP", 00:15:54.667 "adrfam": "IPv4", 00:15:54.667 "traddr": "10.0.0.1", 00:15:54.668 "trsvcid": "55606" 00:15:54.668 }, 00:15:54.668 "auth": { 00:15:54.668 "state": "completed", 00:15:54.668 "digest": "sha384", 00:15:54.668 "dhgroup": "null" 00:15:54.668 } 00:15:54.668 } 00:15:54.668 ]' 00:15:54.668 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.668 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.668 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.668 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.668 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.926 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.926 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:54.926 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.926 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:54.927 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.494 16:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.494 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.754 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.012 00:15:56.012 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.012 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.012 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.271 { 00:15:56.271 "cntlid": 51, 00:15:56.271 "qid": 0, 00:15:56.271 "state": "enabled", 00:15:56.271 "thread": "nvmf_tgt_poll_group_000", 00:15:56.271 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.271 "listen_address": { 00:15:56.271 "trtype": "TCP", 00:15:56.271 "adrfam": "IPv4", 00:15:56.271 "traddr": "10.0.0.2", 00:15:56.271 "trsvcid": "4420" 00:15:56.271 }, 00:15:56.271 "peer_address": { 00:15:56.271 "trtype": "TCP", 00:15:56.271 "adrfam": "IPv4", 00:15:56.271 "traddr": "10.0.0.1", 00:15:56.271 "trsvcid": "55638" 00:15:56.271 }, 00:15:56.271 "auth": { 00:15:56.271 "state": "completed", 00:15:56.271 "digest": "sha384", 00:15:56.271 "dhgroup": "null" 00:15:56.271 } 00:15:56.271 } 00:15:56.271 ]' 00:15:56.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.271 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.530 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:15:56.530 16:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:57.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:57.097 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.356 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:57.615
00:15:57.615 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:57.616 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:57.616 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:57.874 {
00:15:57.874 "cntlid": 53,
00:15:57.874 "qid": 0,
00:15:57.874 "state": "enabled",
00:15:57.874 "thread": "nvmf_tgt_poll_group_000",
00:15:57.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:57.874 "listen_address": {
00:15:57.874 "trtype": "TCP",
00:15:57.874 "adrfam": "IPv4",
00:15:57.874 "traddr": "10.0.0.2",
00:15:57.874 "trsvcid": "4420"
00:15:57.874 },
00:15:57.874 "peer_address": {
00:15:57.874 "trtype": "TCP",
00:15:57.874 "adrfam": "IPv4",
00:15:57.874 "traddr": "10.0.0.1",
00:15:57.874 "trsvcid": "55664"
00:15:57.874 },
00:15:57.874 "auth": {
00:15:57.874 "state": "completed",
00:15:57.874 "digest": "sha384",
00:15:57.874 "dhgroup": "null"
00:15:57.874 }
00:15:57.874 }
00:15:57.874 ]'
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:57.874 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:58.132 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:15:58.132 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:58.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:58.699 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:58.957 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:58.958 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:59.216
00:15:59.216 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:59.216 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:59.216 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:59.475 {
00:15:59.475 "cntlid": 55,
00:15:59.475 "qid": 0,
00:15:59.475 "state": "enabled",
00:15:59.475 "thread": "nvmf_tgt_poll_group_000",
00:15:59.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:59.475 "listen_address": {
00:15:59.475 "trtype": "TCP",
00:15:59.475 "adrfam": "IPv4",
00:15:59.475 "traddr": "10.0.0.2",
00:15:59.475 "trsvcid": "4420"
00:15:59.475 },
00:15:59.475 "peer_address": {
00:15:59.475 "trtype": "TCP",
00:15:59.475 "adrfam": "IPv4",
00:15:59.475 "traddr": "10.0.0.1",
00:15:59.475 "trsvcid": "55694"
00:15:59.475 },
00:15:59.475 "auth": {
00:15:59.475 "state": "completed",
00:15:59.475 "digest": "sha384",
00:15:59.475 "dhgroup": "null"
00:15:59.475 }
00:15:59.475 }
00:15:59.475 ]'
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:59.475 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:59.734 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=:
00:15:59.734 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=:
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:00.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:00.302 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.561 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.819
00:16:00.819 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:00.819 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:00.819 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:01.078 {
00:16:01.078 "cntlid": 57,
00:16:01.078 "qid": 0,
00:16:01.078 "state": "enabled",
00:16:01.078 "thread": "nvmf_tgt_poll_group_000",
00:16:01.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:01.078 "listen_address": {
00:16:01.078 "trtype": "TCP",
00:16:01.078 "adrfam": "IPv4",
00:16:01.078 "traddr": "10.0.0.2",
00:16:01.078 "trsvcid": "4420"
00:16:01.078 },
00:16:01.078 "peer_address": {
00:16:01.078 "trtype": "TCP",
00:16:01.078 "adrfam": "IPv4",
00:16:01.078 "traddr": "10.0.0.1",
00:16:01.078 "trsvcid": "55720"
00:16:01.078 },
00:16:01.078 "auth": {
00:16:01.078 "state": "completed",
00:16:01.078 "digest": "sha384",
00:16:01.078 "dhgroup": "ffdhe2048"
00:16:01.078 }
00:16:01.078 }
00:16:01.078 ]'
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:01.078 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:01.337 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:16:01.337 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:01.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:01.905 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.164 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.423
00:16:02.423 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:02.423 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:02.423 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:02.682 {
00:16:02.682 "cntlid": 59,
00:16:02.682 "qid": 0,
00:16:02.682 "state": "enabled",
00:16:02.682 "thread": "nvmf_tgt_poll_group_000",
00:16:02.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:02.682 "listen_address": {
00:16:02.682 "trtype": "TCP",
00:16:02.682 "adrfam": "IPv4",
00:16:02.682 "traddr": "10.0.0.2",
00:16:02.682 "trsvcid": "4420"
00:16:02.682 },
00:16:02.682 "peer_address": {
00:16:02.682 "trtype": "TCP",
00:16:02.682 "adrfam": "IPv4",
00:16:02.682 "traddr": "10.0.0.1",
00:16:02.682 "trsvcid": "55732"
00:16:02.682 },
00:16:02.682 "auth": {
00:16:02.682 "state": "completed",
00:16:02.682 "digest": "sha384",
00:16:02.682 "dhgroup": "ffdhe2048"
00:16:02.682 }
00:16:02.682 }
00:16:02.682 ]'
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:02.682 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:02.941 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:16:02.942 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:03.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:03.509 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:03.510 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.768 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.027
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.027 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:04.286 {
00:16:04.286 "cntlid": 61,
00:16:04.286 "qid": 0,
00:16:04.286 "state": "enabled",
00:16:04.286 "thread": "nvmf_tgt_poll_group_000",
00:16:04.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:04.286 "listen_address": {
00:16:04.286 "trtype": "TCP",
00:16:04.286 "adrfam": "IPv4",
00:16:04.286 "traddr": "10.0.0.2",
00:16:04.286 "trsvcid": "4420"
00:16:04.286 },
00:16:04.286 "peer_address": {
00:16:04.286 "trtype": "TCP",
00:16:04.286 "adrfam": "IPv4",
00:16:04.286 "traddr": "10.0.0.1",
00:16:04.286 "trsvcid": "42478"
00:16:04.286 },
00:16:04.286 "auth": {
00:16:04.286 "state": "completed",
00:16:04.286 "digest": "sha384",
00:16:04.286 "dhgroup": "ffdhe2048"
00:16:04.286 }
00:16:04.286 }
00:16:04.286 ]'
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:04.286 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:04.544 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:16:04.544 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:05.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.112 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.371 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.630 00:16:05.630 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.630 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.630 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.889 { 00:16:05.889 "cntlid": 63, 00:16:05.889 "qid": 0, 00:16:05.889 "state": "enabled", 00:16:05.889 "thread": "nvmf_tgt_poll_group_000", 00:16:05.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.889 "listen_address": { 00:16:05.889 "trtype": "TCP", 00:16:05.889 "adrfam": 
"IPv4", 00:16:05.889 "traddr": "10.0.0.2", 00:16:05.889 "trsvcid": "4420" 00:16:05.889 }, 00:16:05.889 "peer_address": { 00:16:05.889 "trtype": "TCP", 00:16:05.889 "adrfam": "IPv4", 00:16:05.889 "traddr": "10.0.0.1", 00:16:05.889 "trsvcid": "42510" 00:16:05.889 }, 00:16:05.889 "auth": { 00:16:05.889 "state": "completed", 00:16:05.889 "digest": "sha384", 00:16:05.889 "dhgroup": "ffdhe2048" 00:16:05.889 } 00:16:05.889 } 00:16:05.889 ]' 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.889 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.148 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:06.148 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.716 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.978 
16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.978 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.237 00:16:07.237 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.237 16:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.237 16:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.237 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.237 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.237 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.237 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.496 { 00:16:07.496 "cntlid": 65, 00:16:07.496 "qid": 0, 00:16:07.496 "state": "enabled", 00:16:07.496 "thread": "nvmf_tgt_poll_group_000", 00:16:07.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.496 "listen_address": { 00:16:07.496 "trtype": "TCP", 00:16:07.496 "adrfam": "IPv4", 00:16:07.496 "traddr": "10.0.0.2", 00:16:07.496 "trsvcid": "4420" 00:16:07.496 }, 00:16:07.496 "peer_address": { 00:16:07.496 "trtype": "TCP", 00:16:07.496 "adrfam": "IPv4", 00:16:07.496 "traddr": "10.0.0.1", 00:16:07.496 "trsvcid": "42544" 00:16:07.496 }, 00:16:07.496 "auth": { 00:16:07.496 "state": "completed", 00:16:07.496 "digest": "sha384", 00:16:07.496 "dhgroup": "ffdhe3072" 00:16:07.496 } 00:16:07.496 } 00:16:07.496 ]' 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.496 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.754 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:07.754 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:08.322 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.322 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.322 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.322 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.322 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.322 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.322 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:08.322 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.580 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.839 00:16:08.839 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.839 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.839 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.839 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.098 16:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.098 { 00:16:09.098 "cntlid": 67, 00:16:09.098 "qid": 0, 00:16:09.098 "state": "enabled", 00:16:09.098 "thread": "nvmf_tgt_poll_group_000", 00:16:09.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.098 "listen_address": { 00:16:09.098 "trtype": "TCP", 00:16:09.098 "adrfam": "IPv4", 00:16:09.098 "traddr": "10.0.0.2", 00:16:09.098 "trsvcid": "4420" 00:16:09.098 }, 00:16:09.098 "peer_address": { 00:16:09.098 "trtype": "TCP", 00:16:09.098 "adrfam": "IPv4", 00:16:09.098 "traddr": "10.0.0.1", 00:16:09.098 "trsvcid": "42574" 00:16:09.098 }, 00:16:09.098 "auth": { 00:16:09.098 "state": "completed", 00:16:09.098 "digest": "sha384", 00:16:09.098 "dhgroup": "ffdhe3072" 00:16:09.098 } 00:16:09.098 } 00:16:09.098 ]' 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.098 16:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.357 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:09.357 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:09.926 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.185 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.444 00:16:10.444 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.444 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.445 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.703 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.703 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.703 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.703 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.703 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.703 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.703 { 00:16:10.703 "cntlid": 69, 00:16:10.703 "qid": 0, 00:16:10.703 "state": "enabled", 00:16:10.703 "thread": "nvmf_tgt_poll_group_000", 00:16:10.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.703 
"listen_address": { 00:16:10.703 "trtype": "TCP", 00:16:10.703 "adrfam": "IPv4", 00:16:10.703 "traddr": "10.0.0.2", 00:16:10.703 "trsvcid": "4420" 00:16:10.704 }, 00:16:10.704 "peer_address": { 00:16:10.704 "trtype": "TCP", 00:16:10.704 "adrfam": "IPv4", 00:16:10.704 "traddr": "10.0.0.1", 00:16:10.704 "trsvcid": "42602" 00:16:10.704 }, 00:16:10.704 "auth": { 00:16:10.704 "state": "completed", 00:16:10.704 "digest": "sha384", 00:16:10.704 "dhgroup": "ffdhe3072" 00:16:10.704 } 00:16:10.704 } 00:16:10.704 ]' 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.704 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.963 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:10.963 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.531 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.790 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.049 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.049 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.049 { 00:16:12.049 "cntlid": 71, 00:16:12.049 "qid": 0, 00:16:12.049 "state": "enabled", 00:16:12.049 "thread": "nvmf_tgt_poll_group_000", 00:16:12.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.049 "listen_address": { 00:16:12.049 "trtype": "TCP", 00:16:12.049 "adrfam": "IPv4", 00:16:12.049 "traddr": "10.0.0.2", 00:16:12.049 "trsvcid": "4420" 00:16:12.049 }, 00:16:12.049 "peer_address": { 00:16:12.049 "trtype": "TCP", 00:16:12.049 "adrfam": "IPv4", 00:16:12.049 "traddr": "10.0.0.1", 00:16:12.049 "trsvcid": "42616" 00:16:12.049 }, 00:16:12.049 "auth": { 00:16:12.049 "state": "completed", 00:16:12.049 "digest": "sha384", 00:16:12.050 "dhgroup": "ffdhe3072" 00:16:12.050 } 00:16:12.050 } 00:16:12.050 ]' 00:16:12.050 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.308 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.308 16:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.308 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.308 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.309 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.309 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.309 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.567 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:12.567 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:13.136 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.395 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.395 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.395 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.395 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.653 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.653 16:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.653 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.653 { 00:16:13.653 "cntlid": 73, 00:16:13.653 "qid": 0, 00:16:13.653 "state": "enabled", 00:16:13.653 "thread": "nvmf_tgt_poll_group_000", 00:16:13.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.653 "listen_address": { 00:16:13.653 "trtype": "TCP", 00:16:13.653 "adrfam": "IPv4", 00:16:13.653 "traddr": "10.0.0.2", 00:16:13.653 "trsvcid": "4420" 00:16:13.653 }, 00:16:13.653 "peer_address": { 00:16:13.653 "trtype": "TCP", 00:16:13.653 "adrfam": "IPv4", 00:16:13.653 "traddr": "10.0.0.1", 00:16:13.653 "trsvcid": "49776" 00:16:13.653 }, 00:16:13.653 "auth": { 00:16:13.653 "state": "completed", 00:16:13.653 "digest": "sha384", 00:16:13.653 "dhgroup": "ffdhe4096" 00:16:13.653 } 00:16:13.653 } 00:16:13.653 ]' 00:16:13.910 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.910 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.910 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.910 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.910 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.911 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.911 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.911 16:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.169 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:14.169 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.736 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.995 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.996 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.996 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.255 00:16:15.255 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.255 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.255 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.514 { 00:16:15.514 "cntlid": 75, 00:16:15.514 "qid": 0, 00:16:15.514 "state": "enabled", 00:16:15.514 "thread": "nvmf_tgt_poll_group_000", 00:16:15.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.514 
"listen_address": { 00:16:15.514 "trtype": "TCP", 00:16:15.514 "adrfam": "IPv4", 00:16:15.514 "traddr": "10.0.0.2", 00:16:15.514 "trsvcid": "4420" 00:16:15.514 }, 00:16:15.514 "peer_address": { 00:16:15.514 "trtype": "TCP", 00:16:15.514 "adrfam": "IPv4", 00:16:15.514 "traddr": "10.0.0.1", 00:16:15.514 "trsvcid": "49794" 00:16:15.514 }, 00:16:15.514 "auth": { 00:16:15.514 "state": "completed", 00:16:15.514 "digest": "sha384", 00:16:15.514 "dhgroup": "ffdhe4096" 00:16:15.514 } 00:16:15.514 } 00:16:15.514 ]' 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.514 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.774 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:15.774 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.341 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.600 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.859 00:16:16.859 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:16.859 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.859 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.118 { 00:16:17.118 "cntlid": 77, 00:16:17.118 "qid": 0, 00:16:17.118 "state": "enabled", 00:16:17.118 "thread": "nvmf_tgt_poll_group_000", 00:16:17.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.118 "listen_address": { 00:16:17.118 "trtype": "TCP", 00:16:17.118 "adrfam": "IPv4", 00:16:17.118 "traddr": "10.0.0.2", 00:16:17.118 "trsvcid": "4420" 00:16:17.118 }, 00:16:17.118 "peer_address": { 00:16:17.118 "trtype": "TCP", 00:16:17.118 "adrfam": "IPv4", 00:16:17.118 "traddr": "10.0.0.1", 00:16:17.118 "trsvcid": "49812" 00:16:17.118 }, 00:16:17.118 "auth": { 00:16:17.118 "state": "completed", 00:16:17.118 "digest": "sha384", 00:16:17.118 "dhgroup": "ffdhe4096" 00:16:17.118 } 00:16:17.118 } 00:16:17.118 ]' 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.118 16:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.118 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.377 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:17.378 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.945 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.204 16:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.204 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.462 00:16:18.463 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.463 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.463 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.723 16:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.723 { 00:16:18.723 "cntlid": 79, 00:16:18.723 "qid": 0, 00:16:18.723 "state": "enabled", 00:16:18.723 "thread": "nvmf_tgt_poll_group_000", 00:16:18.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.723 "listen_address": { 00:16:18.723 "trtype": "TCP", 00:16:18.723 "adrfam": "IPv4", 00:16:18.723 "traddr": "10.0.0.2", 00:16:18.723 "trsvcid": "4420" 00:16:18.723 }, 00:16:18.723 "peer_address": { 00:16:18.723 "trtype": "TCP", 00:16:18.723 "adrfam": "IPv4", 00:16:18.723 "traddr": "10.0.0.1", 00:16:18.723 "trsvcid": "49836" 00:16:18.723 }, 00:16:18.723 "auth": { 00:16:18.723 "state": "completed", 00:16:18.723 "digest": "sha384", 00:16:18.723 "dhgroup": "ffdhe4096" 00:16:18.723 } 00:16:18.723 } 00:16:18.723 ]' 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.723 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.723 16:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.983 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:18.983 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:19.552 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.817 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.075 00:16:20.334 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.334 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.334 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.334 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.334 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.334 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.334 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.334 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.334 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.334 { 00:16:20.334 "cntlid": 81, 00:16:20.334 "qid": 0, 00:16:20.334 "state": "enabled", 00:16:20.334 "thread": "nvmf_tgt_poll_group_000", 00:16:20.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.334 "listen_address": { 
00:16:20.334 "trtype": "TCP", 00:16:20.334 "adrfam": "IPv4", 00:16:20.334 "traddr": "10.0.0.2", 00:16:20.334 "trsvcid": "4420" 00:16:20.334 }, 00:16:20.334 "peer_address": { 00:16:20.334 "trtype": "TCP", 00:16:20.334 "adrfam": "IPv4", 00:16:20.334 "traddr": "10.0.0.1", 00:16:20.334 "trsvcid": "49862" 00:16:20.334 }, 00:16:20.334 "auth": { 00:16:20.334 "state": "completed", 00:16:20.335 "digest": "sha384", 00:16:20.335 "dhgroup": "ffdhe6144" 00:16:20.335 } 00:16:20.335 } 00:16:20.335 ]' 00:16:20.335 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.593 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.852 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:20.852 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.420 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.987 00:16:21.987 16:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.987 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.987 { 00:16:21.987 "cntlid": 83, 00:16:21.987 "qid": 0, 00:16:21.987 "state": "enabled", 00:16:21.987 "thread": "nvmf_tgt_poll_group_000", 00:16:21.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.987 "listen_address": { 00:16:21.987 "trtype": "TCP", 00:16:21.987 "adrfam": "IPv4", 00:16:21.987 "traddr": "10.0.0.2", 00:16:21.987 "trsvcid": "4420" 00:16:21.987 }, 00:16:21.987 "peer_address": { 00:16:21.987 "trtype": "TCP", 00:16:21.987 "adrfam": "IPv4", 00:16:21.987 "traddr": "10.0.0.1", 00:16:21.987 "trsvcid": "49892" 00:16:21.987 }, 00:16:21.987 "auth": { 00:16:21.987 "state": "completed", 00:16:21.987 "digest": "sha384", 00:16:21.987 "dhgroup": "ffdhe6144" 00:16:21.987 } 00:16:21.987 } 00:16:21.987 ]' 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.246 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.504 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:22.505 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.072 16:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.072 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.331 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.589 00:16:23.589 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.589 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.589 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.848 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.848 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.848 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.848 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.848 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.848 { 00:16:23.848 "cntlid": 85, 00:16:23.848 "qid": 0, 00:16:23.848 "state": "enabled", 00:16:23.848 "thread": "nvmf_tgt_poll_group_000", 00:16:23.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.848 "listen_address": { 00:16:23.848 "trtype": "TCP", 00:16:23.848 "adrfam": "IPv4", 00:16:23.848 "traddr": "10.0.0.2", 00:16:23.848 "trsvcid": "4420" 00:16:23.848 }, 00:16:23.848 "peer_address": { 00:16:23.848 "trtype": "TCP", 00:16:23.848 "adrfam": "IPv4", 00:16:23.848 "traddr": "10.0.0.1", 00:16:23.848 "trsvcid": "54952" 00:16:23.848 }, 00:16:23.848 "auth": { 00:16:23.848 "state": "completed", 00:16:23.848 "digest": "sha384", 00:16:23.848 "dhgroup": "ffdhe6144" 00:16:23.848 } 00:16:23.848 } 00:16:23.848 ]' 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.849 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.107 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:24.107 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.725 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.025 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.284 00:16:25.284 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.284 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.284 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.543 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.543 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.543 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.543 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.543 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.543 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.543 { 00:16:25.543 "cntlid": 87, 00:16:25.543 "qid": 0, 00:16:25.543 "state": "enabled", 00:16:25.543 "thread": "nvmf_tgt_poll_group_000", 00:16:25.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.543 "listen_address": { 00:16:25.543 "trtype": 
"TCP", 00:16:25.543 "adrfam": "IPv4", 00:16:25.543 "traddr": "10.0.0.2", 00:16:25.543 "trsvcid": "4420" 00:16:25.543 }, 00:16:25.543 "peer_address": { 00:16:25.543 "trtype": "TCP", 00:16:25.543 "adrfam": "IPv4", 00:16:25.543 "traddr": "10.0.0.1", 00:16:25.544 "trsvcid": "54972" 00:16:25.544 }, 00:16:25.544 "auth": { 00:16:25.544 "state": "completed", 00:16:25.544 "digest": "sha384", 00:16:25.544 "dhgroup": "ffdhe6144" 00:16:25.544 } 00:16:25.544 } 00:16:25.544 ]' 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.544 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.803 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:25.803 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.385 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.643 16:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.643 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.209 00:16:27.209 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.209 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.209 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.467 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.467 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.467 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.467 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.467 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.467 { 00:16:27.467 "cntlid": 89, 00:16:27.467 "qid": 0, 00:16:27.467 "state": "enabled", 00:16:27.467 "thread": "nvmf_tgt_poll_group_000", 00:16:27.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.467 "listen_address": { 00:16:27.467 "trtype": "TCP", 00:16:27.467 "adrfam": "IPv4", 00:16:27.467 "traddr": "10.0.0.2", 00:16:27.468 "trsvcid": "4420" 00:16:27.468 }, 00:16:27.468 "peer_address": { 00:16:27.468 "trtype": "TCP", 00:16:27.468 "adrfam": "IPv4", 00:16:27.468 "traddr": "10.0.0.1", 00:16:27.468 "trsvcid": "55004" 00:16:27.468 }, 00:16:27.468 "auth": { 00:16:27.468 "state": "completed", 00:16:27.468 "digest": "sha384", 00:16:27.468 "dhgroup": "ffdhe8192" 00:16:27.468 } 00:16:27.468 } 00:16:27.468 ]' 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.468 16:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.468 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.726 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:27.726 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
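The jq probes the test script runs at `target/auth.sh@75`–`@77` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) can be mirrored in a small standalone sketch. This is illustrative only, not part of the SPDK test suite; the JSON is a trimmed copy of the `nvmf_subsystem_get_qpairs` payload captured in this log.

```python
import json

# Trimmed qpairs payload, as reported by nvmf_subsystem_get_qpairs in this log
qpairs = json.loads("""
[
  {
    "cntlid": 89,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe8192"
    }
  }
]
""")

# Equivalent of the script's jq checks:
#   jq -r '.[0].auth.digest'  -> sha384
#   jq -r '.[0].auth.dhgroup' -> ffdhe8192
#   jq -r '.[0].auth.state'   -> completed
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "ffdhe8192"
assert auth["state"] == "completed"
print("qpair authenticated:", auth)
```

The `[[ sha384 == \s\h\a\3\8\4 ]]` comparisons in the log are the bash equivalent of these assertions: the test passes only when the negotiated digest, DH group, and auth state match what `bdev_nvme_set_options` configured.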
00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:28.293 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:28.550 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:28.550 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.550 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.551 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.118 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.118 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.119 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.119 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.119 { 00:16:29.119 "cntlid": 91, 00:16:29.119 "qid": 0, 00:16:29.119 "state": "enabled", 00:16:29.119 "thread": "nvmf_tgt_poll_group_000", 00:16:29.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.119 "listen_address": { 00:16:29.119 "trtype": "TCP", 00:16:29.119 "adrfam": "IPv4", 00:16:29.119 "traddr": "10.0.0.2", 00:16:29.119 "trsvcid": "4420" 00:16:29.119 }, 00:16:29.119 "peer_address": { 00:16:29.119 "trtype": "TCP", 00:16:29.119 "adrfam": "IPv4", 00:16:29.119 "traddr": "10.0.0.1", 00:16:29.119 "trsvcid": "55024" 00:16:29.119 }, 00:16:29.119 "auth": { 00:16:29.119 "state": "completed", 00:16:29.119 "digest": "sha384", 00:16:29.119 "dhgroup": "ffdhe8192" 00:16:29.119 } 00:16:29.119 } 00:16:29.119 ]' 00:16:29.119 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.377 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.377 16:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.377 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.377 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.377 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:29.377 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.377 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.635 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:29.635 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
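The `--dhchap-secret` strings passed to `nvme connect` above use the `DHHC-1:<hh>:<base64>:` representation (as produced by `nvme gen-dhchap-key`): `hh` identifies the hash transformation (`00` = non-transformed), and the base64 payload carries the key bytes followed by a 4-byte checksum field. A minimal parser sketch, assuming that layout; the secrets are copied verbatim from this log, and the parser is an illustration, not SPDK code (the checksum tail is stripped, not verified, here):

```python
import base64

# DH-HMAC-CHAP keys are 32, 48, or 64 bytes
ALLOWED_KEY_LENS = {32, 48, 64}

def parse_dhchap_secret(secret: str):
    """Split DHHC-1:<hh>:<base64(key || 4-byte checksum)>: into (hash_id, key)."""
    prefix, hash_id, payload = secret.rstrip(":").split(":")
    assert prefix == "DHHC-1"
    raw = base64.b64decode(payload)
    key = raw[:-4]  # drop the 4-byte checksum tail
    assert len(key) in ALLOWED_KEY_LENS
    return hash_id, key

# Host and controller secrets from the nvme connect invocation above
h_id, h_key = parse_dhchap_secret(
    "DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==:")
c_id, c_key = parse_dhchap_secret(
    "DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:")
print(h_id, len(h_key))  # 00 48
print(c_id, len(c_key))  # 03 64
```

Note how the test cycles pair a host key (`--dhchap-secret`, here a 48-byte `00` key) with a controller key (`--dhchap-ctrl-secret`, here a 64-byte `03` key) for bidirectional authentication.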
00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.203 16:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.462 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.720 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.979 { 00:16:30.979 "cntlid": 93, 00:16:30.979 "qid": 0, 00:16:30.979 "state": "enabled", 00:16:30.979 "thread": "nvmf_tgt_poll_group_000", 00:16:30.979 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.979 "listen_address": { 00:16:30.979 "trtype": "TCP", 00:16:30.979 "adrfam": "IPv4", 00:16:30.979 "traddr": "10.0.0.2", 00:16:30.979 "trsvcid": "4420" 00:16:30.979 }, 00:16:30.979 "peer_address": { 00:16:30.979 "trtype": "TCP", 00:16:30.979 "adrfam": "IPv4", 00:16:30.979 "traddr": "10.0.0.1", 00:16:30.979 "trsvcid": "55046" 00:16:30.979 }, 00:16:30.979 "auth": { 00:16:30.979 "state": "completed", 00:16:30.979 "digest": "sha384", 00:16:30.979 "dhgroup": "ffdhe8192" 00:16:30.979 } 00:16:30.979 } 00:16:30.979 ]' 00:16:30.979 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.237 16:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.496 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:31.496 16:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.064 16:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.631 00:16:32.631 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:32.631 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.631 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.890 { 00:16:32.890 "cntlid": 95, 00:16:32.890 "qid": 0, 00:16:32.890 "state": "enabled", 00:16:32.890 "thread": "nvmf_tgt_poll_group_000", 00:16:32.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.890 "listen_address": { 00:16:32.890 "trtype": "TCP", 00:16:32.890 "adrfam": "IPv4", 00:16:32.890 "traddr": "10.0.0.2", 00:16:32.890 "trsvcid": "4420" 00:16:32.890 }, 00:16:32.890 "peer_address": { 00:16:32.890 "trtype": "TCP", 00:16:32.890 "adrfam": "IPv4", 00:16:32.890 "traddr": "10.0.0.1", 00:16:32.890 "trsvcid": "55088" 00:16:32.890 }, 00:16:32.890 "auth": { 00:16:32.890 "state": "completed", 00:16:32.890 "digest": "sha384", 00:16:32.890 "dhgroup": "ffdhe8192" 00:16:32.890 } 00:16:32.890 } 00:16:32.890 ]' 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.890 16:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.890 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.150 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:33.150 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.718 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.977 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.236 00:16:34.236 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.236 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.236 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.495 16:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.495 { 00:16:34.495 "cntlid": 97, 00:16:34.495 "qid": 0, 00:16:34.495 "state": "enabled", 00:16:34.495 "thread": "nvmf_tgt_poll_group_000", 00:16:34.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.495 "listen_address": { 00:16:34.495 "trtype": "TCP", 00:16:34.495 "adrfam": "IPv4", 00:16:34.495 "traddr": "10.0.0.2", 00:16:34.495 "trsvcid": "4420" 00:16:34.495 }, 00:16:34.495 "peer_address": { 00:16:34.495 "trtype": "TCP", 00:16:34.495 "adrfam": "IPv4", 00:16:34.495 "traddr": "10.0.0.1", 00:16:34.495 "trsvcid": "53294" 00:16:34.495 }, 00:16:34.495 "auth": { 00:16:34.495 "state": "completed", 00:16:34.495 "digest": "sha512", 00:16:34.495 "dhgroup": "null" 00:16:34.495 } 00:16:34.495 } 00:16:34.495 ]' 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.495 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.754 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:34.754 16:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.321 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.580 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.839 00:16:35.839 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.839 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.839 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.098 { 00:16:36.098 "cntlid": 99, 
00:16:36.098 "qid": 0, 00:16:36.098 "state": "enabled", 00:16:36.098 "thread": "nvmf_tgt_poll_group_000", 00:16:36.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.098 "listen_address": { 00:16:36.098 "trtype": "TCP", 00:16:36.098 "adrfam": "IPv4", 00:16:36.098 "traddr": "10.0.0.2", 00:16:36.098 "trsvcid": "4420" 00:16:36.098 }, 00:16:36.098 "peer_address": { 00:16:36.098 "trtype": "TCP", 00:16:36.098 "adrfam": "IPv4", 00:16:36.098 "traddr": "10.0.0.1", 00:16:36.098 "trsvcid": "53328" 00:16:36.098 }, 00:16:36.098 "auth": { 00:16:36.098 "state": "completed", 00:16:36.098 "digest": "sha512", 00:16:36.098 "dhgroup": "null" 00:16:36.098 } 00:16:36.098 } 00:16:36.098 ]' 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.098 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.099 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.358 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.358 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.358 16:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.358 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret 
DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:36.358 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:36.925 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.184 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.443 00:16:37.443 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.443 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.443 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.702 { 00:16:37.702 "cntlid": 101, 00:16:37.702 "qid": 0, 00:16:37.702 "state": "enabled", 00:16:37.702 "thread": "nvmf_tgt_poll_group_000", 00:16:37.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.702 "listen_address": { 00:16:37.702 "trtype": "TCP", 00:16:37.702 "adrfam": "IPv4", 00:16:37.702 "traddr": "10.0.0.2", 00:16:37.702 "trsvcid": "4420" 00:16:37.702 }, 00:16:37.702 "peer_address": { 00:16:37.702 "trtype": "TCP", 00:16:37.702 "adrfam": "IPv4", 00:16:37.702 "traddr": "10.0.0.1", 00:16:37.702 "trsvcid": "53356" 00:16:37.702 }, 00:16:37.702 "auth": { 00:16:37.702 "state": "completed", 00:16:37.702 "digest": "sha512", 00:16:37.702 "dhgroup": "null" 00:16:37.702 } 00:16:37.702 } 
00:16:37.702 ]' 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.702 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.961 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:37.961 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.529 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.529 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.788 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.047 00:16:39.047 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.047 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.047 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.306 { 00:16:39.306 "cntlid": 103, 00:16:39.306 "qid": 0, 00:16:39.306 "state": "enabled", 00:16:39.306 "thread": "nvmf_tgt_poll_group_000", 00:16:39.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.306 "listen_address": { 00:16:39.306 "trtype": "TCP", 00:16:39.306 "adrfam": "IPv4", 00:16:39.306 "traddr": "10.0.0.2", 00:16:39.306 "trsvcid": "4420" 00:16:39.306 }, 00:16:39.306 "peer_address": { 00:16:39.306 "trtype": "TCP", 00:16:39.306 "adrfam": "IPv4", 00:16:39.306 "traddr": "10.0.0.1", 00:16:39.306 "trsvcid": "53382" 00:16:39.306 }, 00:16:39.306 "auth": { 00:16:39.306 "state": "completed", 00:16:39.306 "digest": "sha512", 00:16:39.306 "dhgroup": "null" 00:16:39.306 } 00:16:39.306 } 00:16:39.306 ]' 00:16:39.306 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.306 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.306 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.306 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.306 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.306 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.306 16:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.306 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.565 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:39.565 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:40.133 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.133 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.133 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.134 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.134 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.134 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.134 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.134 16:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:40.134 16:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.393 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.651 00:16:40.651 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.651 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.651 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.910 { 00:16:40.910 "cntlid": 105, 00:16:40.910 "qid": 0, 00:16:40.910 "state": "enabled", 00:16:40.910 "thread": "nvmf_tgt_poll_group_000", 00:16:40.910 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.910 "listen_address": { 00:16:40.910 "trtype": "TCP", 00:16:40.910 "adrfam": "IPv4", 00:16:40.910 "traddr": "10.0.0.2", 00:16:40.910 "trsvcid": "4420" 00:16:40.910 }, 00:16:40.910 "peer_address": { 00:16:40.910 "trtype": "TCP", 00:16:40.910 "adrfam": "IPv4", 00:16:40.910 "traddr": "10.0.0.1", 00:16:40.910 "trsvcid": "53412" 00:16:40.910 }, 00:16:40.910 "auth": { 00:16:40.910 "state": "completed", 00:16:40.910 "digest": "sha512", 00:16:40.910 "dhgroup": "ffdhe2048" 00:16:40.910 } 00:16:40.910 } 00:16:40.910 ]' 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.910 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.168 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret 
DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:41.168 16:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.735 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.994 16:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.994 16:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.253 00:16:42.253 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.253 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.253 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.512 { 00:16:42.512 "cntlid": 107, 00:16:42.512 "qid": 0, 00:16:42.512 "state": "enabled", 00:16:42.512 "thread": "nvmf_tgt_poll_group_000", 00:16:42.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.512 "listen_address": { 00:16:42.512 "trtype": "TCP", 00:16:42.512 "adrfam": "IPv4", 00:16:42.512 "traddr": "10.0.0.2", 00:16:42.512 "trsvcid": "4420" 00:16:42.512 }, 00:16:42.512 "peer_address": { 00:16:42.512 "trtype": "TCP", 00:16:42.512 "adrfam": "IPv4", 00:16:42.512 "traddr": "10.0.0.1", 00:16:42.512 "trsvcid": "53436" 00:16:42.512 }, 00:16:42.512 "auth": { 00:16:42.512 "state": 
"completed", 00:16:42.512 "digest": "sha512", 00:16:42.512 "dhgroup": "ffdhe2048" 00:16:42.512 } 00:16:42.512 } 00:16:42.512 ]' 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.512 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.771 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:42.771 16:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:43.339 16:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.339 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.598 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:43.598 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.598 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.599 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.858 00:16:43.858 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.858 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.858 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.118 
16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.118 { 00:16:44.118 "cntlid": 109, 00:16:44.118 "qid": 0, 00:16:44.118 "state": "enabled", 00:16:44.118 "thread": "nvmf_tgt_poll_group_000", 00:16:44.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.118 "listen_address": { 00:16:44.118 "trtype": "TCP", 00:16:44.118 "adrfam": "IPv4", 00:16:44.118 "traddr": "10.0.0.2", 00:16:44.118 "trsvcid": "4420" 00:16:44.118 }, 00:16:44.118 "peer_address": { 00:16:44.118 "trtype": "TCP", 00:16:44.118 "adrfam": "IPv4", 00:16:44.118 "traddr": "10.0.0.1", 00:16:44.118 "trsvcid": "48988" 00:16:44.118 }, 00:16:44.118 "auth": { 00:16:44.118 "state": "completed", 00:16:44.118 "digest": "sha512", 00:16:44.118 "dhgroup": "ffdhe2048" 00:16:44.118 } 00:16:44.118 } 00:16:44.118 ]' 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.118 16:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.118 16:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.376 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:44.376 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.944 
16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.944 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.203 16:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.203 16:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.462 00:16:45.462 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.462 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.462 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.720 { 00:16:45.720 "cntlid": 111, 
00:16:45.720 "qid": 0, 00:16:45.720 "state": "enabled", 00:16:45.720 "thread": "nvmf_tgt_poll_group_000", 00:16:45.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.720 "listen_address": { 00:16:45.720 "trtype": "TCP", 00:16:45.720 "adrfam": "IPv4", 00:16:45.720 "traddr": "10.0.0.2", 00:16:45.720 "trsvcid": "4420" 00:16:45.720 }, 00:16:45.720 "peer_address": { 00:16:45.720 "trtype": "TCP", 00:16:45.720 "adrfam": "IPv4", 00:16:45.720 "traddr": "10.0.0.1", 00:16:45.720 "trsvcid": "49000" 00:16:45.720 }, 00:16:45.720 "auth": { 00:16:45.720 "state": "completed", 00:16:45.720 "digest": "sha512", 00:16:45.720 "dhgroup": "ffdhe2048" 00:16:45.720 } 00:16:45.720 } 00:16:45.720 ]' 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.720 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.979 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:45.979 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.546 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.805 16:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.805 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.806 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.806 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.806 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.806 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.806 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.064 00:16:47.064 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.064 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.064 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.322 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.322 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.322 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.322 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.322 { 00:16:47.322 "cntlid": 113, 00:16:47.322 "qid": 0, 00:16:47.322 "state": "enabled", 00:16:47.322 "thread": "nvmf_tgt_poll_group_000", 00:16:47.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.322 "listen_address": { 00:16:47.322 "trtype": "TCP", 00:16:47.322 "adrfam": "IPv4", 00:16:47.322 "traddr": "10.0.0.2", 00:16:47.322 "trsvcid": "4420" 00:16:47.322 }, 00:16:47.322 "peer_address": { 00:16:47.322 "trtype": "TCP", 00:16:47.322 "adrfam": "IPv4", 00:16:47.322 "traddr": "10.0.0.1", 00:16:47.322 "trsvcid": "49038" 00:16:47.322 }, 00:16:47.322 "auth": { 00:16:47.322 "state": 
"completed", 00:16:47.322 "digest": "sha512", 00:16:47.322 "dhgroup": "ffdhe3072" 00:16:47.322 } 00:16:47.322 } 00:16:47.322 ]' 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.322 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.581 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:47.581 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret 
DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.149 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.407 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.408 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.408 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.408 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.408 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.666 00:16:48.666 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.666 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.666 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.924 { 00:16:48.924 "cntlid": 115, 00:16:48.924 "qid": 0, 00:16:48.924 "state": "enabled", 00:16:48.924 "thread": "nvmf_tgt_poll_group_000", 00:16:48.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.924 "listen_address": { 00:16:48.924 "trtype": "TCP", 00:16:48.924 "adrfam": "IPv4", 00:16:48.924 "traddr": "10.0.0.2", 00:16:48.924 "trsvcid": "4420" 00:16:48.924 }, 00:16:48.924 "peer_address": { 00:16:48.924 "trtype": "TCP", 00:16:48.924 "adrfam": "IPv4", 00:16:48.924 "traddr": "10.0.0.1", 00:16:48.924 "trsvcid": "49070" 00:16:48.924 }, 00:16:48.924 "auth": { 00:16:48.924 "state": "completed", 00:16:48.924 "digest": "sha512", 00:16:48.924 "dhgroup": "ffdhe3072" 00:16:48.924 } 00:16:48.924 } 00:16:48.924 ]' 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.924 16:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.924 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.183 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:49.183 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:49.750 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.009 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.268
00:16:50.268 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:50.268 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:50.268 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:50.527 {
00:16:50.527 "cntlid": 117,
00:16:50.527 "qid": 0,
00:16:50.527 "state": "enabled",
00:16:50.527 "thread": "nvmf_tgt_poll_group_000",
00:16:50.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:50.527 "listen_address": {
00:16:50.527 "trtype": "TCP",
00:16:50.527 "adrfam": "IPv4",
00:16:50.527 "traddr": "10.0.0.2",
00:16:50.527 "trsvcid": "4420"
00:16:50.527 },
00:16:50.527 "peer_address": {
00:16:50.527 "trtype": "TCP",
00:16:50.527 "adrfam": "IPv4",
00:16:50.527 "traddr": "10.0.0.1",
00:16:50.527 "trsvcid": "49106"
00:16:50.527 },
00:16:50.527 "auth": {
00:16:50.527 "state": "completed",
00:16:50.527 "digest": "sha512",
00:16:50.527 "dhgroup": "ffdhe3072"
00:16:50.527 }
00:16:50.527 }
00:16:50.527 ]'
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.527 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:50.786 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:16:50.786 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:51.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:51.354 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:51.613 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:51.872
00:16:51.872 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:51.872 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:51.872 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:52.131 {
00:16:52.131 "cntlid": 119,
00:16:52.131 "qid": 0,
00:16:52.131 "state": "enabled",
00:16:52.131 "thread": "nvmf_tgt_poll_group_000",
00:16:52.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:52.131 "listen_address": {
00:16:52.131 "trtype": "TCP",
00:16:52.131 "adrfam": "IPv4",
00:16:52.131 "traddr": "10.0.0.2",
00:16:52.131 "trsvcid": "4420"
00:16:52.131 },
00:16:52.131 "peer_address": {
00:16:52.131 "trtype": "TCP",
00:16:52.131 "adrfam": "IPv4",
00:16:52.131 "traddr": "10.0.0.1",
00:16:52.131 "trsvcid": "49132"
00:16:52.131 },
00:16:52.131 "auth": {
00:16:52.131 "state": "completed",
00:16:52.131 "digest": "sha512",
00:16:52.131 "dhgroup": "ffdhe3072"
00:16:52.131 }
00:16:52.131 }
00:16:52.131 ]'
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:52.131 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:52.390 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=:
00:16:52.390 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=:
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:52.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:52.957 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.216 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.475
00:16:53.475 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:53.475 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:53.475 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:53.733 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:53.733 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:53.733 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.733 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.733 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.733 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:53.733 {
00:16:53.733 "cntlid": 121,
00:16:53.733 "qid": 0,
00:16:53.734 "state": "enabled",
00:16:53.734 "thread": "nvmf_tgt_poll_group_000",
00:16:53.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:53.734 "listen_address": {
00:16:53.734 "trtype": "TCP",
00:16:53.734 "adrfam": "IPv4",
00:16:53.734 "traddr": "10.0.0.2",
00:16:53.734 "trsvcid": "4420"
00:16:53.734 },
00:16:53.734 "peer_address": {
00:16:53.734 "trtype": "TCP",
00:16:53.734 "adrfam": "IPv4",
00:16:53.734 "traddr": "10.0.0.1",
00:16:53.734 "trsvcid": "38394"
00:16:53.734 },
00:16:53.734 "auth": {
00:16:53.734 "state": "completed",
00:16:53.734 "digest": "sha512",
00:16:53.734 "dhgroup": "ffdhe4096"
00:16:53.734 }
00:16:53.734 }
00:16:53.734 ]'
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:53.734 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:53.992 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:16:53.992 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=:
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:54.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:54.560 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:54.818 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:54.819 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.078
00:16:55.078 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:55.078 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:55.078 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:55.337 {
00:16:55.337 "cntlid": 123,
00:16:55.337 "qid": 0,
00:16:55.337 "state": "enabled",
00:16:55.337 "thread": "nvmf_tgt_poll_group_000",
00:16:55.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:55.337 "listen_address": {
00:16:55.337 "trtype": "TCP",
00:16:55.337 "adrfam": "IPv4",
00:16:55.337 "traddr": "10.0.0.2",
00:16:55.337 "trsvcid": "4420"
00:16:55.337 },
00:16:55.337 "peer_address": {
00:16:55.337 "trtype": "TCP",
00:16:55.337 "adrfam": "IPv4",
00:16:55.337 "traddr": "10.0.0.1",
00:16:55.337 "trsvcid": "38428"
00:16:55.337 },
00:16:55.337 "auth": {
00:16:55.337 "state": "completed",
00:16:55.337 "digest": "sha512",
00:16:55.337 "dhgroup": "ffdhe4096"
00:16:55.337 }
00:16:55.337 }
00:16:55.337 ]'
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:55.337 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:55.596 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:16:55.596 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==:
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:56.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:56.164 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:56.165 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:56.423 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:16:56.423 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:56.423 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:56.423 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:56.423 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:56.423 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.424 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.682
00:16:56.682 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:56.682 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:56.682 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:56.941 {
00:16:56.941 "cntlid": 125,
00:16:56.941 "qid": 0,
00:16:56.941 "state": "enabled",
00:16:56.941 "thread": "nvmf_tgt_poll_group_000",
00:16:56.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:56.941 "listen_address": {
00:16:56.941 "trtype": "TCP",
00:16:56.941 "adrfam": "IPv4",
00:16:56.941 "traddr": "10.0.0.2",
00:16:56.941 "trsvcid": "4420"
00:16:56.941 },
00:16:56.941 "peer_address": {
00:16:56.941 "trtype": "TCP",
00:16:56.941 "adrfam": "IPv4",
00:16:56.941 "traddr": "10.0.0.1",
00:16:56.941 "trsvcid": "38464"
00:16:56.941 },
00:16:56.941 "auth": {
00:16:56.941 "state": "completed",
00:16:56.941 "digest": "sha512",
00:16:56.941 "dhgroup": "ffdhe4096"
00:16:56.941 }
00:16:56.941 }
00:16:56.941 ]'
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:56.941 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:57.200 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:57.200 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:57.200 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:57.200 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:57.200 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:57.200 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:16:57.200 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68:
00:16:57.767 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:58.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:58.026 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:58.027 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:58.286
00:16:58.286 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:58.286 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:58.286 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.543 { 00:16:58.543 "cntlid": 127, 00:16:58.543 "qid": 0, 00:16:58.543 "state": "enabled", 00:16:58.543 "thread": "nvmf_tgt_poll_group_000", 00:16:58.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.543 "listen_address": { 00:16:58.543 "trtype": "TCP", 00:16:58.543 "adrfam": "IPv4", 00:16:58.543 "traddr": "10.0.0.2", 00:16:58.543 "trsvcid": "4420" 00:16:58.543 }, 00:16:58.543 "peer_address": { 00:16:58.543 "trtype": "TCP", 00:16:58.543 "adrfam": "IPv4", 00:16:58.543 "traddr": "10.0.0.1", 00:16:58.543 "trsvcid": "38494" 00:16:58.543 }, 00:16:58.543 "auth": { 00:16:58.543 "state": "completed", 00:16:58.543 "digest": "sha512", 00:16:58.543 "dhgroup": "ffdhe4096" 00:16:58.543 } 00:16:58.543 } 00:16:58.543 ]' 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.543 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.802 16:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.802 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.802 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.802 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.802 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.802 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:58.802 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:16:59.370 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.370 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.370 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.370 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.629 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.197 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.197 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.197 16:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.197 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.197 { 00:17:00.197 "cntlid": 129, 00:17:00.197 "qid": 0, 00:17:00.197 "state": "enabled", 00:17:00.197 "thread": "nvmf_tgt_poll_group_000", 00:17:00.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.197 "listen_address": { 00:17:00.197 "trtype": "TCP", 00:17:00.197 "adrfam": "IPv4", 00:17:00.197 "traddr": "10.0.0.2", 00:17:00.197 "trsvcid": "4420" 00:17:00.197 }, 00:17:00.197 "peer_address": { 00:17:00.197 "trtype": "TCP", 00:17:00.197 "adrfam": "IPv4", 00:17:00.197 "traddr": "10.0.0.1", 00:17:00.197 "trsvcid": "38514" 00:17:00.197 }, 00:17:00.197 "auth": { 00:17:00.197 "state": "completed", 00:17:00.197 "digest": "sha512", 00:17:00.197 "dhgroup": "ffdhe6144" 00:17:00.197 } 00:17:00.197 } 00:17:00.197 ]' 00:17:00.197 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.456 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.715 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:17:00.715 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.283 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:01.283 16:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:01.283 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:01.283 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.283 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.284 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:01.284 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.284 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.284 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.284 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.284 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.543 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.543 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.543 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.543 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.801 00:17:01.801 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.801 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.801 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.131 { 00:17:02.131 "cntlid": 131, 00:17:02.131 "qid": 0, 00:17:02.131 "state": "enabled", 00:17:02.131 "thread": "nvmf_tgt_poll_group_000", 00:17:02.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.131 "listen_address": { 00:17:02.131 "trtype": "TCP", 00:17:02.131 "adrfam": "IPv4", 00:17:02.131 "traddr": "10.0.0.2", 00:17:02.131 
"trsvcid": "4420" 00:17:02.131 }, 00:17:02.131 "peer_address": { 00:17:02.131 "trtype": "TCP", 00:17:02.131 "adrfam": "IPv4", 00:17:02.131 "traddr": "10.0.0.1", 00:17:02.131 "trsvcid": "38546" 00:17:02.131 }, 00:17:02.131 "auth": { 00:17:02.131 "state": "completed", 00:17:02.131 "digest": "sha512", 00:17:02.131 "dhgroup": "ffdhe6144" 00:17:02.131 } 00:17:02.131 } 00:17:02.131 ]' 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.131 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.449 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:17:02.449 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.017 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.018 16:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.585 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.585 { 00:17:03.585 "cntlid": 133, 00:17:03.585 "qid": 0, 00:17:03.585 "state": "enabled", 00:17:03.585 "thread": "nvmf_tgt_poll_group_000", 00:17:03.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.585 "listen_address": { 00:17:03.585 "trtype": "TCP", 00:17:03.585 "adrfam": "IPv4", 00:17:03.585 "traddr": "10.0.0.2", 00:17:03.585 "trsvcid": "4420" 00:17:03.585 }, 00:17:03.585 "peer_address": { 00:17:03.585 "trtype": "TCP", 00:17:03.585 "adrfam": "IPv4", 00:17:03.585 "traddr": "10.0.0.1", 00:17:03.585 "trsvcid": "36766" 00:17:03.585 }, 00:17:03.585 "auth": { 00:17:03.585 "state": "completed", 00:17:03.585 "digest": "sha512", 00:17:03.585 "dhgroup": "ffdhe6144" 00:17:03.585 } 00:17:03.585 } 00:17:03.585 ]' 00:17:03.585 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.844 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.844 16:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.844 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.844 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.844 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.844 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.844 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.102 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:17:04.102 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.670 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.929 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.929 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.929 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.929 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.188 00:17:05.188 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.188 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.188 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.447 { 00:17:05.447 "cntlid": 135, 00:17:05.447 "qid": 0, 00:17:05.447 "state": "enabled", 00:17:05.447 "thread": "nvmf_tgt_poll_group_000", 00:17:05.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.447 "listen_address": { 00:17:05.447 "trtype": "TCP", 00:17:05.447 "adrfam": "IPv4", 00:17:05.447 "traddr": "10.0.0.2", 00:17:05.447 "trsvcid": "4420" 00:17:05.447 }, 00:17:05.447 "peer_address": { 00:17:05.447 "trtype": "TCP", 00:17:05.447 "adrfam": "IPv4", 00:17:05.447 "traddr": "10.0.0.1", 00:17:05.447 "trsvcid": "36792" 00:17:05.447 }, 00:17:05.447 "auth": { 00:17:05.447 "state": "completed", 00:17:05.447 "digest": "sha512", 00:17:05.447 "dhgroup": "ffdhe6144" 00:17:05.447 } 00:17:05.447 } 00:17:05.447 ]' 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.447 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.706 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:05.706 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.273 16:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.273 16:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.532 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:06.532 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.532 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.532 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.532 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.533 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.100 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.100 { 00:17:07.100 "cntlid": 137, 00:17:07.100 "qid": 0, 00:17:07.100 "state": "enabled", 00:17:07.100 "thread": "nvmf_tgt_poll_group_000", 00:17:07.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.100 "listen_address": { 00:17:07.100 "trtype": "TCP", 00:17:07.100 "adrfam": "IPv4", 00:17:07.100 "traddr": "10.0.0.2", 00:17:07.100 
"trsvcid": "4420" 00:17:07.100 }, 00:17:07.100 "peer_address": { 00:17:07.100 "trtype": "TCP", 00:17:07.100 "adrfam": "IPv4", 00:17:07.100 "traddr": "10.0.0.1", 00:17:07.100 "trsvcid": "36828" 00:17:07.100 }, 00:17:07.100 "auth": { 00:17:07.100 "state": "completed", 00:17:07.100 "digest": "sha512", 00:17:07.100 "dhgroup": "ffdhe8192" 00:17:07.100 } 00:17:07.100 } 00:17:07.100 ]' 00:17:07.100 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.359 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.359 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.359 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.359 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.359 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.359 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.359 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.617 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:17:07.617 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.185 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.445 16:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.445 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.704 00:17:08.704 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.704 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.704 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.963 { 00:17:08.963 "cntlid": 139, 00:17:08.963 "qid": 0, 00:17:08.963 "state": "enabled", 00:17:08.963 "thread": "nvmf_tgt_poll_group_000", 00:17:08.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.963 "listen_address": { 00:17:08.963 "trtype": "TCP", 00:17:08.963 "adrfam": "IPv4", 00:17:08.963 "traddr": "10.0.0.2", 00:17:08.963 "trsvcid": "4420" 00:17:08.963 }, 00:17:08.963 "peer_address": { 00:17:08.963 "trtype": "TCP", 00:17:08.963 "adrfam": "IPv4", 00:17:08.963 "traddr": "10.0.0.1", 00:17:08.963 "trsvcid": "36850" 00:17:08.963 }, 00:17:08.963 "auth": { 00:17:08.963 "state": "completed", 00:17:08.963 "digest": "sha512", 00:17:08.963 "dhgroup": "ffdhe8192" 00:17:08.963 } 00:17:08.963 } 00:17:08.963 ]' 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.963 16:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.963 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.222 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.222 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.222 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.222 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.222 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.481 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:17:09.481 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: --dhchap-ctrl-secret DHHC-1:02:ZmVlMWNhZWExY2I4ZjgwN2U2MDI0N2NkMWNjNGRlNGIxYWFlZjBkNjk1NTUzZTY300Qmew==: 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.048 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.049 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.307 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.307 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.307 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.308 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.566 00:17:10.566 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.566 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.566 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.825 16:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.825 { 00:17:10.825 "cntlid": 141, 00:17:10.825 "qid": 0, 00:17:10.825 "state": "enabled", 00:17:10.825 "thread": "nvmf_tgt_poll_group_000", 00:17:10.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.825 "listen_address": { 00:17:10.825 "trtype": "TCP", 00:17:10.825 "adrfam": "IPv4", 00:17:10.825 "traddr": "10.0.0.2", 00:17:10.825 "trsvcid": "4420" 00:17:10.825 }, 00:17:10.825 "peer_address": { 00:17:10.825 "trtype": "TCP", 00:17:10.825 "adrfam": "IPv4", 00:17:10.825 "traddr": "10.0.0.1", 00:17:10.825 "trsvcid": "36888" 00:17:10.825 }, 00:17:10.825 "auth": { 00:17:10.825 "state": "completed", 00:17:10.825 "digest": "sha512", 00:17:10.825 "dhgroup": "ffdhe8192" 00:17:10.825 } 00:17:10.825 } 00:17:10.825 ]' 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.825 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.083 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.083 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.083 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.083 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.083 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.341 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:17:11.341 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:01:ODJhM2I1ZWNkYzFiYWU5NGI1MzQwNjAxOWFiN2MxZTDn0P68: 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.909 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.476 00:17:12.476 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.477 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.477 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.735 { 00:17:12.735 "cntlid": 143, 00:17:12.735 "qid": 0, 00:17:12.735 "state": "enabled", 00:17:12.735 "thread": "nvmf_tgt_poll_group_000", 00:17:12.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.735 "listen_address": { 00:17:12.735 "trtype": "TCP", 00:17:12.735 "adrfam": 
"IPv4", 00:17:12.735 "traddr": "10.0.0.2", 00:17:12.735 "trsvcid": "4420" 00:17:12.735 }, 00:17:12.735 "peer_address": { 00:17:12.735 "trtype": "TCP", 00:17:12.735 "adrfam": "IPv4", 00:17:12.735 "traddr": "10.0.0.1", 00:17:12.735 "trsvcid": "36914" 00:17:12.735 }, 00:17:12.735 "auth": { 00:17:12.735 "state": "completed", 00:17:12.735 "digest": "sha512", 00:17:12.735 "dhgroup": "ffdhe8192" 00:17:12.735 } 00:17:12.735 } 00:17:12.735 ]' 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.735 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.993 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:12.993 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:13.560 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.820 16:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.820 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.388 00:17:14.388 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.388 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.388 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.647 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.648 { 00:17:14.648 "cntlid": 145, 00:17:14.648 "qid": 0, 00:17:14.648 "state": "enabled", 00:17:14.648 "thread": "nvmf_tgt_poll_group_000", 00:17:14.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.648 "listen_address": { 00:17:14.648 "trtype": "TCP", 00:17:14.648 "adrfam": "IPv4", 00:17:14.648 "traddr": "10.0.0.2", 00:17:14.648 "trsvcid": "4420" 00:17:14.648 }, 00:17:14.648 "peer_address": { 00:17:14.648 "trtype": "TCP", 00:17:14.648 "adrfam": "IPv4", 00:17:14.648 "traddr": "10.0.0.1", 00:17:14.648 "trsvcid": "37258" 00:17:14.648 }, 00:17:14.648 "auth": { 00:17:14.648 "state": 
"completed", 00:17:14.648 "digest": "sha512", 00:17:14.648 "dhgroup": "ffdhe8192" 00:17:14.648 } 00:17:14.648 } 00:17:14.648 ]' 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.648 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.907 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:17:14.907 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MGM3YTQ2YzJmMjY4ZTliZGFiZWIzMzllZWM1NmVmNzM4Y2FmOWNmNzk1ZmNkNTA0wgoFTw==: --dhchap-ctrl-secret 
DHHC-1:03:N2YzZDhjYTA0Y2M3NWI4ZTEyY2U0ZTNkMWM4ODI0YjBjZTY1MmUzNTYzNjYxY2IyODdkMzQ5ZDA3NzEwZDk1Yh1NZro=: 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:15.475 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:16.044 request: 00:17:16.044 { 00:17:16.044 "name": "nvme0", 00:17:16.044 "trtype": "tcp", 00:17:16.044 "traddr": "10.0.0.2", 00:17:16.044 "adrfam": "ipv4", 00:17:16.044 "trsvcid": "4420", 00:17:16.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.044 "prchk_reftag": false, 00:17:16.044 "prchk_guard": false, 00:17:16.044 "hdgst": false, 00:17:16.044 "ddgst": false, 00:17:16.044 "dhchap_key": "key2", 00:17:16.044 "allow_unrecognized_csi": false, 00:17:16.044 "method": "bdev_nvme_attach_controller", 00:17:16.044 "req_id": 1 00:17:16.044 } 00:17:16.044 Got JSON-RPC error response 00:17:16.044 response: 00:17:16.044 { 00:17:16.044 "code": -5, 00:17:16.044 "message": 
"Input/output error" 00:17:16.044 } 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:16.044 16:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.044 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.612 request: 00:17:16.612 { 00:17:16.612 "name": "nvme0", 00:17:16.612 "trtype": "tcp", 00:17:16.612 "traddr": "10.0.0.2", 00:17:16.612 "adrfam": "ipv4", 00:17:16.612 "trsvcid": "4420", 00:17:16.612 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.612 "prchk_reftag": false, 00:17:16.612 "prchk_guard": false, 00:17:16.612 "hdgst": 
false, 00:17:16.612 "ddgst": false, 00:17:16.612 "dhchap_key": "key1", 00:17:16.612 "dhchap_ctrlr_key": "ckey2", 00:17:16.612 "allow_unrecognized_csi": false, 00:17:16.612 "method": "bdev_nvme_attach_controller", 00:17:16.612 "req_id": 1 00:17:16.612 } 00:17:16.612 Got JSON-RPC error response 00:17:16.612 response: 00:17:16.612 { 00:17:16.612 "code": -5, 00:17:16.612 "message": "Input/output error" 00:17:16.612 } 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.612 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.613 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.872 request: 00:17:16.872 { 00:17:16.872 "name": "nvme0", 00:17:16.872 "trtype": 
"tcp", 00:17:16.872 "traddr": "10.0.0.2", 00:17:16.872 "adrfam": "ipv4", 00:17:16.872 "trsvcid": "4420", 00:17:16.872 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.872 "prchk_reftag": false, 00:17:16.872 "prchk_guard": false, 00:17:16.872 "hdgst": false, 00:17:16.872 "ddgst": false, 00:17:16.872 "dhchap_key": "key1", 00:17:16.872 "dhchap_ctrlr_key": "ckey1", 00:17:16.872 "allow_unrecognized_csi": false, 00:17:16.872 "method": "bdev_nvme_attach_controller", 00:17:16.872 "req_id": 1 00:17:16.872 } 00:17:16.872 Got JSON-RPC error response 00:17:16.872 response: 00:17:16.872 { 00:17:16.872 "code": -5, 00:17:16.872 "message": "Input/output error" 00:17:16.872 } 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2715476 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2715476 ']' 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2715476 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.872 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715476 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715476' 00:17:17.131 killing process with pid 2715476 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2715476 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2715476 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2737919 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2737919 00:17:17.131 16:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2737919 ']' 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.131 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2737919 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2737919 ']' 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.390 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.650 null0 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pAB 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.fr9 ]] 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fr9 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.g8J 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.650 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.vZ7 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vZ7 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vUj 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Nox ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nox 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L6T 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.910 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.477 nvme0n1 00:17:18.477 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.477 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.477 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.735 { 00:17:18.735 "cntlid": 1, 00:17:18.735 "qid": 0, 00:17:18.735 "state": "enabled", 00:17:18.735 "thread": "nvmf_tgt_poll_group_000", 00:17:18.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.735 "listen_address": { 00:17:18.735 "trtype": "TCP", 00:17:18.735 "adrfam": "IPv4", 00:17:18.735 "traddr": "10.0.0.2", 00:17:18.735 "trsvcid": "4420" 00:17:18.735 }, 00:17:18.735 "peer_address": { 00:17:18.735 "trtype": "TCP", 00:17:18.735 "adrfam": "IPv4", 00:17:18.735 "traddr": 
"10.0.0.1", 00:17:18.735 "trsvcid": "37328" 00:17:18.735 }, 00:17:18.735 "auth": { 00:17:18.735 "state": "completed", 00:17:18.735 "digest": "sha512", 00:17:18.735 "dhgroup": "ffdhe8192" 00:17:18.735 } 00:17:18.735 } 00:17:18.735 ]' 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.735 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.994 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.994 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.994 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.994 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.994 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.994 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.253 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:19.253 16:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:19.822 16:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:19.822 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.081 16:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.081 request: 00:17:20.081 { 00:17:20.081 "name": "nvme0", 00:17:20.081 "trtype": "tcp", 00:17:20.081 "traddr": "10.0.0.2", 00:17:20.081 "adrfam": "ipv4", 00:17:20.081 "trsvcid": "4420", 00:17:20.081 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.081 "prchk_reftag": false, 00:17:20.081 "prchk_guard": false, 00:17:20.081 "hdgst": false, 00:17:20.081 "ddgst": false, 00:17:20.081 "dhchap_key": "key3", 00:17:20.081 
"allow_unrecognized_csi": false, 00:17:20.081 "method": "bdev_nvme_attach_controller", 00:17:20.081 "req_id": 1 00:17:20.081 } 00:17:20.081 Got JSON-RPC error response 00:17:20.081 response: 00:17:20.081 { 00:17:20.081 "code": -5, 00:17:20.081 "message": "Input/output error" 00:17:20.081 } 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:20.081 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.340 16:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.340 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.599 request: 00:17:20.599 { 00:17:20.599 "name": "nvme0", 00:17:20.599 "trtype": "tcp", 00:17:20.599 "traddr": "10.0.0.2", 00:17:20.599 "adrfam": "ipv4", 00:17:20.599 "trsvcid": "4420", 00:17:20.599 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.599 "prchk_reftag": false, 00:17:20.599 "prchk_guard": false, 00:17:20.599 "hdgst": false, 00:17:20.599 "ddgst": false, 00:17:20.599 "dhchap_key": "key3", 00:17:20.599 "allow_unrecognized_csi": false, 00:17:20.599 "method": "bdev_nvme_attach_controller", 00:17:20.599 "req_id": 1 00:17:20.599 } 00:17:20.599 Got JSON-RPC error response 00:17:20.599 response: 00:17:20.599 { 00:17:20.599 "code": -5, 00:17:20.599 "message": "Input/output error" 00:17:20.599 } 00:17:20.599 
16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.599 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.600 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.859 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.860 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.119 request: 00:17:21.119 { 00:17:21.119 "name": "nvme0", 00:17:21.119 "trtype": "tcp", 00:17:21.119 "traddr": "10.0.0.2", 00:17:21.119 "adrfam": "ipv4", 00:17:21.119 "trsvcid": "4420", 00:17:21.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.119 "prchk_reftag": false, 00:17:21.119 "prchk_guard": false, 00:17:21.119 "hdgst": false, 00:17:21.119 "ddgst": false, 00:17:21.119 "dhchap_key": "key0", 00:17:21.119 "dhchap_ctrlr_key": "key1", 00:17:21.119 "allow_unrecognized_csi": false, 00:17:21.119 "method": "bdev_nvme_attach_controller", 00:17:21.119 "req_id": 1 00:17:21.119 } 00:17:21.119 Got JSON-RPC error response 00:17:21.119 response: 00:17:21.119 { 00:17:21.119 "code": -5, 00:17:21.119 "message": "Input/output error" 00:17:21.119 } 00:17:21.119 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.119 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.119 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.119 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.119 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:21.120 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:21.120 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:21.378 nvme0n1 00:17:21.379 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:21.379 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:21.379 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.637 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.637 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.637 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:21.896 16:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:22.464 nvme0n1 00:17:22.464 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:22.464 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:22.464 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.723 
16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.723 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:22.982 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.982 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:22.982 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: --dhchap-ctrl-secret DHHC-1:03:OWU5MzEzNmYwMjkyMGUwMDg2NmM5MDYxMGJlN2U3NTM2ODM4Y2RlMjFiYzdiN2ZjNTgzZmQ0MGM4MTMxYjRiNHJaENo=: 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.550 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:23.809 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:24.378 request: 00:17:24.378 { 00:17:24.378 "name": "nvme0", 00:17:24.378 "trtype": "tcp", 00:17:24.378 "traddr": "10.0.0.2", 00:17:24.378 "adrfam": "ipv4", 00:17:24.378 "trsvcid": "4420", 00:17:24.378 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:24.378 "prchk_reftag": false, 00:17:24.378 "prchk_guard": false, 00:17:24.378 "hdgst": false, 00:17:24.378 "ddgst": false, 00:17:24.378 "dhchap_key": "key1", 00:17:24.378 "allow_unrecognized_csi": false, 00:17:24.378 "method": "bdev_nvme_attach_controller", 00:17:24.378 "req_id": 1 00:17:24.378 } 00:17:24.378 Got JSON-RPC error response 00:17:24.378 response: 00:17:24.378 { 00:17:24.378 "code": -5, 00:17:24.378 "message": "Input/output error" 00:17:24.378 } 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.378 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.946 nvme0n1 00:17:24.946 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:24.946 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:24.946 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.206 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.206 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.206 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:25.465 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:25.724 nvme0n1 00:17:25.724 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:25.724 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:25.724 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.724 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.724 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.724 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: '' 2s 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: ]] 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWVjYzJkNjRhZTQ4YjJkYTQzYzdlYzdmMGUzNGYzMzmLGQm7: 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:25.984 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:28.520 
16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: 2s 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:28.520 16:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: ]] 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmQxODg0MDMwZDJjYTdhZGFiZTg4YWZhZmMxYzUzNjEzYjM5MWMzY2NkMDBkNjg5Tb85Xw==: 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:28.520 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:30.426 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:30.995 nvme0n1 00:17:30.995 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.995 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.995 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.995 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.995 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.995 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:31.563 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:31.822 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:31.822 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:31.822 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:32.081 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:32.082 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:32.082 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.082 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:32.082 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.082 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:32.082 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:32.650 request: 00:17:32.650 { 00:17:32.650 "name": "nvme0", 00:17:32.650 "dhchap_key": "key1", 00:17:32.650 "dhchap_ctrlr_key": "key3", 00:17:32.650 "method": "bdev_nvme_set_keys", 00:17:32.650 "req_id": 1 00:17:32.650 } 00:17:32.650 Got JSON-RPC error response 00:17:32.650 response: 00:17:32.650 { 00:17:32.650 "code": -13, 00:17:32.650 "message": "Permission denied" 00:17:32.650 } 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:32.650 16:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:32.650 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:33.586 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:33.586 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:33.586 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.845 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:34.780 nvme0n1 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.780 16:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:34.780 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:35.039 request: 00:17:35.039 { 00:17:35.039 "name": "nvme0", 00:17:35.039 "dhchap_key": "key2", 00:17:35.039 "dhchap_ctrlr_key": "key0", 00:17:35.039 "method": "bdev_nvme_set_keys", 00:17:35.039 "req_id": 1 00:17:35.039 } 00:17:35.039 Got JSON-RPC error response 00:17:35.039 response: 00:17:35.039 { 00:17:35.039 "code": -13, 00:17:35.039 "message": "Permission denied" 00:17:35.039 } 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:35.039 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.297 16:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:35.297 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:36.232 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:36.232 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:36.232 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2715588 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2715588 ']' 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2715588 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715588 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715588' 00:17:36.491 killing process with pid 2715588 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2715588 00:17:36.491 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2715588 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.059 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.059 rmmod nvme_tcp 00:17:37.060 rmmod nvme_fabrics 00:17:37.060 rmmod nvme_keyring 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2737919 ']' 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2737919 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2737919 ']' 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2737919 
00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737919 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2737919' 00:17:37.060 killing process with pid 2737919 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2737919 00:17:37.060 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2737919 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.319 16:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.319 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.224 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.224 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pAB /tmp/spdk.key-sha256.g8J /tmp/spdk.key-sha384.vUj /tmp/spdk.key-sha512.L6T /tmp/spdk.key-sha512.fr9 /tmp/spdk.key-sha384.vZ7 /tmp/spdk.key-sha256.Nox '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:39.224 00:17:39.224 real 2m34.003s 00:17:39.224 user 5m55.219s 00:17:39.224 sys 0m24.402s 00:17:39.224 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.224 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.224 ************************************ 00:17:39.224 END TEST nvmf_auth_target 00:17:39.224 ************************************ 00:17:39.224 16:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:39.224 16:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:39.224 16:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:39.225 16:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:39.225 16:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.225 ************************************ 00:17:39.225 START TEST nvmf_bdevio_no_huge 00:17:39.225 ************************************ 00:17:39.225 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:39.484 * Looking for test storage... 00:17:39.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.484 --rc genhtml_branch_coverage=1 00:17:39.484 --rc genhtml_function_coverage=1 00:17:39.484 --rc genhtml_legend=1 00:17:39.484 --rc geninfo_all_blocks=1 00:17:39.484 --rc geninfo_unexecuted_blocks=1 00:17:39.484 00:17:39.484 ' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.484 --rc genhtml_branch_coverage=1 00:17:39.484 --rc genhtml_function_coverage=1 00:17:39.484 --rc genhtml_legend=1 00:17:39.484 --rc geninfo_all_blocks=1 00:17:39.484 --rc geninfo_unexecuted_blocks=1 00:17:39.484 00:17:39.484 ' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.484 --rc genhtml_branch_coverage=1 00:17:39.484 --rc genhtml_function_coverage=1 00:17:39.484 --rc genhtml_legend=1 00:17:39.484 --rc geninfo_all_blocks=1 00:17:39.484 --rc geninfo_unexecuted_blocks=1 00:17:39.484 00:17:39.484 ' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.484 --rc genhtml_branch_coverage=1 
00:17:39.484 --rc genhtml_function_coverage=1 00:17:39.484 --rc genhtml_legend=1 00:17:39.484 --rc geninfo_all_blocks=1 00:17:39.484 --rc geninfo_unexecuted_blocks=1 00:17:39.484 00:17:39.484 ' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.484 16:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
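The trace above records a real script error: `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`, raised by `'[' '' -eq 1 ']'`. This is the classic bash pitfall where `test`'s `-eq` operator needs integers on both sides, so an unset/empty variable makes the expression itself malformed (exit status 2) rather than merely false (exit status 1). A minimal, self-contained reproduction; the variable name is a hypothetical stand-in, not the actual flag from `common.sh`:

```shell
# [ x -eq y ] requires integer operands; an empty expansion makes test(1)
# itself fail with status 2 (usage error), distinct from status 1 (false).
flag=""                         # hypothetical stand-in for an unset test flag

[ "$flag" -eq 1 ] 2>/dev/null
empty_status=$?                 # 2: malformed test expression

[ 0 -eq 1 ]
false_status=$?                 # 1: well-formed comparison that is false

# Robust pattern: default the variable so the comparison is always well-formed.
if [ "${flag:-0}" -eq 1 ]; then enabled=yes; else enabled=no; fi
echo "empty=$empty_status false=$false_status enabled=$enabled"
```

The log shows the script tolerates the error because the failed test simply takes the false branch, but the `${var:-0}` default avoids the stderr noise entirely.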
00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:39.484 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.485 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:46.058 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:46.058 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:46.058 Found net devices under 0000:86:00.0: cvl_0_0 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.058 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.059 
16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:46.059 Found net devices under 0000:86:00.1: cvl_0_1 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
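The records above show how `nvmf_tcp_init` assigns roles to the two discovered devices: with `(( 2 > 1 ))` true, `cvl_0_0` becomes the target interface and `cvl_0_1` the initiator interface. A hedged sketch of that selection logic (the single-device fallback branch is an assumption, not traced in this log):

```shell
# Role assignment for discovered net devices, mirroring the trace:
# first device -> target side, second -> initiator side.
net_devs=(cvl_0_0 cvl_0_1)              # as found under 0000:86:00.0 / .1
TCP_INTERFACE_LIST=("${net_devs[@]}")

if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
else
    # Assumed fallback: one device serves both roles.
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[0]}
fi
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
```

The target interface is then moved into the `cvl_0_0_ns_spdk` network namespace in the records that follow, which is what lets 10.0.0.1 and 10.0.0.2 talk over a single physical NIC pair.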
00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.059 16:10:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:46.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:17:46.059 00:17:46.059 --- 10.0.0.2 ping statistics --- 00:17:46.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.059 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:17:46.059 00:17:46.059 --- 10.0.0.1 ping statistics --- 00:17:46.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.059 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2744790 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2744790 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2744790 ']' 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.059 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.059 [2024-11-20 16:10:46.203756] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
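`nvmfappstart` above launches `nvmf_tgt` in the background, records `nvmfpid=2744790`, then blocks in `waitforlisten` until the app's RPC socket (`/var/tmp/spdk.sock`) appears. A self-contained sketch of that start-and-poll pattern; a plain file and a `sleep` subshell stand in for the socket and the SPDK binary so the sketch needs no privileges:

```shell
# Start a background "app", capture its pid, poll until its readiness
# marker exists (stand-in for waiting on /var/tmp/spdk.sock).
rpc_sock=$(mktemp -u)                   # path the "app" will create
( sleep 0.2; : > "$rpc_sock" ) &        # stand-in for nvmf_tgt
nvmfpid=$!

listening=no
for (( i = 0; i < 100; i++ )); do       # bounded retries, like max_retries=100
    if [ -e "$rpc_sock" ]; then
        listening=yes
        break
    fi
    sleep 0.1
done

wait "$nvmfpid"
rm -f "$rpc_sock"
echo "pid=$nvmfpid listening=$listening"
```

The real `waitforlisten` checks both that the pid is alive and that the UNIX domain socket accepts RPCs; this sketch only models the bounded polling loop.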
00:17:46.059 [2024-11-20 16:10:46.203806] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:46.059 [2024-11-20 16:10:46.286687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.059 [2024-11-20 16:10:46.332689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.059 [2024-11-20 16:10:46.332724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.059 [2024-11-20 16:10:46.332731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.059 [2024-11-20 16:10:46.332737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.059 [2024-11-20 16:10:46.332742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
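The reactor notices above (cores 3, 4, 5, 6) follow directly from the `-m 0x78` core mask passed to `nvmf_tgt`: 0x78 is binary 1111000, i.e. bits 3 through 6 set. A small sketch that decodes a hex coremask into the core list:

```shell
# Decode a DPDK/SPDK-style hex core mask into the list of selected cores.
mask=0x78                               # as passed via -m in the trace
cores=()
for (( bit = 0; bit < 64; bit++ )); do
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")                 # bit N set -> reactor on core N
    fi
done
echo "cores: ${cores[*]}"               # 0x78 -> cores 3 4 5 6
```

The bdevio app later in the log uses `-c 0x7` the same way, which matches its reactors starting on cores 0, 1 and 2.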
00:17:46.059 [2024-11-20 16:10:46.333963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:46.059 [2024-11-20 16:10:46.334052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:46.059 [2024-11-20 16:10:46.334159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.059 [2024-11-20 16:10:46.334159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 [2024-11-20 16:10:47.089192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.321 16:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 Malloc0 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 [2024-11-20 16:10:47.125447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.321 16:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.321 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:46.322 { 00:17:46.322 "params": { 00:17:46.322 "name": "Nvme$subsystem", 00:17:46.322 "trtype": "$TEST_TRANSPORT", 00:17:46.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:46.322 "adrfam": "ipv4", 00:17:46.322 "trsvcid": "$NVMF_PORT", 00:17:46.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:46.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:46.322 "hdgst": ${hdgst:-false}, 00:17:46.322 "ddgst": ${ddgst:-false} 00:17:46.322 }, 00:17:46.322 "method": "bdev_nvme_attach_controller" 00:17:46.322 } 00:17:46.322 EOF 00:17:46.322 )") 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
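The `gen_nvmf_target_json` trace above builds the bdevio `--json` config by expanding a heredoc per subsystem, with `${hdgst:-false}`-style defaults filling unset digest options. A hedged, standalone sketch of that heredoc pattern, with values mirroring the log (tcp transport, 10.0.0.2:4420, cnode1):

```shell
# Build one bdev_nvme_attach_controller stanza the way the trace does:
# an unquoted heredoc expands variables, and ${var:-false} supplies
# defaults for the unset hdgst/ddgst options.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the real helper the stanzas are accumulated into an array and normalized through `jq .` before being fed to bdevio on `/dev/fd/62`; this sketch stops at the single expanded stanza.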
00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:46.322 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:46.322 "params": { 00:17:46.322 "name": "Nvme1", 00:17:46.322 "trtype": "tcp", 00:17:46.322 "traddr": "10.0.0.2", 00:17:46.322 "adrfam": "ipv4", 00:17:46.322 "trsvcid": "4420", 00:17:46.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:46.322 "hdgst": false, 00:17:46.322 "ddgst": false 00:17:46.322 }, 00:17:46.322 "method": "bdev_nvme_attach_controller" 00:17:46.322 }' 00:17:46.627 [2024-11-20 16:10:47.176364] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:17:46.628 [2024-11-20 16:10:47.176410] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2744927 ] 00:17:46.628 [2024-11-20 16:10:47.256276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:46.628 [2024-11-20 16:10:47.305689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.628 [2024-11-20 16:10:47.305795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.628 [2024-11-20 16:10:47.305795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.933 I/O targets: 00:17:46.933 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:46.933 00:17:46.933 00:17:46.933 CUnit - A unit testing framework for C - Version 2.1-3 00:17:46.933 http://cunit.sourceforge.net/ 00:17:46.933 00:17:46.933 00:17:46.933 Suite: bdevio tests on: Nvme1n1 00:17:46.933 Test: blockdev write read block ...passed 00:17:46.933 Test: blockdev write zeroes read block ...passed 00:17:46.933 Test: blockdev write zeroes read no split ...passed 00:17:46.933 Test: blockdev write zeroes 
read split ...passed 00:17:46.933 Test: blockdev write zeroes read split partial ...passed 00:17:46.933 Test: blockdev reset ...[2024-11-20 16:10:47.755641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:46.933 [2024-11-20 16:10:47.755708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e920 (9): Bad file descriptor 00:17:47.224 [2024-11-20 16:10:47.771701] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:47.224 passed 00:17:47.224 Test: blockdev write read 8 blocks ...passed 00:17:47.224 Test: blockdev write read size > 128k ...passed 00:17:47.224 Test: blockdev write read invalid size ...passed 00:17:47.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:47.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:47.224 Test: blockdev write read max offset ...passed 00:17:47.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:47.224 Test: blockdev writev readv 8 blocks ...passed 00:17:47.224 Test: blockdev writev readv 30 x 1block ...passed 00:17:47.224 Test: blockdev writev readv block ...passed 00:17:47.224 Test: blockdev writev readv size > 128k ...passed 00:17:47.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:47.224 Test: blockdev comparev and writev ...[2024-11-20 16:10:47.981587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.981614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.981629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 
16:10:47.981637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.981876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.981886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.981898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.981905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.982178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.982188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.982199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.982206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.982440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.982449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:47.224 [2024-11-20 16:10:47.982461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.224 [2024-11-20 16:10:47.982468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:47.224 passed 00:17:47.483 Test: blockdev nvme passthru rw ...passed 00:17:47.483 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:10:48.064270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.483 [2024-11-20 16:10:48.064286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:47.483 [2024-11-20 16:10:48.064401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.483 [2024-11-20 16:10:48.064411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:47.483 [2024-11-20 16:10:48.064511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.483 [2024-11-20 16:10:48.064520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:47.483 [2024-11-20 16:10:48.064620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.483 [2024-11-20 16:10:48.064629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:47.483 passed 00:17:47.483 Test: blockdev nvme admin passthru ...passed 00:17:47.483 Test: blockdev copy ...passed 00:17:47.483 00:17:47.483 Run Summary: Type Total Ran Passed Failed Inactive 00:17:47.483 suites 1 1 n/a 0 0 00:17:47.483 tests 23 23 23 0 0 00:17:47.483 asserts 152 152 152 0 n/a 00:17:47.483 00:17:47.483 Elapsed time = 0.979 seconds 
00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:47.744 rmmod nvme_tcp 00:17:47.744 rmmod nvme_fabrics 00:17:47.744 rmmod nvme_keyring 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2744790 ']' 00:17:47.744 16:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2744790 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2744790 ']' 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2744790 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744790 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744790' 00:17:47.744 killing process with pid 2744790 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2744790 00:17:47.744 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2744790 00:17:48.003 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:48.003 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:48.003 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:48.003 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:48.003 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:48.004 16:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:48.004 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:48.004 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:48.004 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:48.004 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.004 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.004 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:50.540 00:17:50.540 real 0m10.847s 00:17:50.540 user 0m13.849s 00:17:50.540 sys 0m5.376s 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:50.540 ************************************ 00:17:50.540 END TEST nvmf_bdevio_no_huge 00:17:50.540 ************************************ 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:50.540 
************************************ 00:17:50.540 START TEST nvmf_tls 00:17:50.540 ************************************ 00:17:50.540 16:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:50.540 * Looking for test storage... 00:17:50.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.540 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:50.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.541 --rc genhtml_branch_coverage=1 00:17:50.541 --rc genhtml_function_coverage=1 00:17:50.541 --rc genhtml_legend=1 00:17:50.541 --rc geninfo_all_blocks=1 00:17:50.541 --rc geninfo_unexecuted_blocks=1 00:17:50.541 00:17:50.541 ' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:50.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.541 --rc genhtml_branch_coverage=1 00:17:50.541 --rc genhtml_function_coverage=1 00:17:50.541 --rc genhtml_legend=1 00:17:50.541 --rc geninfo_all_blocks=1 00:17:50.541 --rc geninfo_unexecuted_blocks=1 00:17:50.541 00:17:50.541 ' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:50.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.541 --rc genhtml_branch_coverage=1 00:17:50.541 --rc genhtml_function_coverage=1 00:17:50.541 --rc genhtml_legend=1 00:17:50.541 --rc geninfo_all_blocks=1 00:17:50.541 --rc geninfo_unexecuted_blocks=1 00:17:50.541 00:17:50.541 ' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:50.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.541 --rc genhtml_branch_coverage=1 00:17:50.541 --rc genhtml_function_coverage=1 00:17:50.541 --rc genhtml_legend=1 00:17:50.541 --rc geninfo_all_blocks=1 00:17:50.541 --rc geninfo_unexecuted_blocks=1 00:17:50.541 00:17:50.541 ' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.541 
16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:50.541 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:50.542 16:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.115 16:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:57.115 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:57.115 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.115 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.116 16:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:57.116 Found net devices under 0000:86:00.0: cvl_0_0 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:57.116 Found net devices under 0000:86:00.1: cvl_0_1 00:17:57.116 16:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:57.116 
16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.116 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:57.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:17:57.116 00:17:57.116 --- 10.0.0.2 ping statistics --- 00:17:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.116 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:17:57.116 00:17:57.116 --- 10.0.0.1 ping statistics --- 00:17:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.116 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2748708 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2748708 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2748708 ']' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 [2024-11-20 16:10:57.202351] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:17:57.116 [2024-11-20 16:10:57.202402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.116 [2024-11-20 16:10:57.282593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.116 [2024-11-20 16:10:57.323413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.116 [2024-11-20 16:10:57.323451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:57.116 [2024-11-20 16:10:57.323461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.116 [2024-11-20 16:10:57.323466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.116 [2024-11-20 16:10:57.323471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.116 [2024-11-20 16:10:57.324055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:57.116 true 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:57.116 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:57.116 
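The waitforlisten step above blocks until the freshly started nvmf_tgt accepts connections on /var/tmp/spdk.sock. A minimal Python sketch of that readiness check (simplified: the real shell helper also verifies the pid is still alive and bounds the number of retries via max_retries):

```python
import socket
import time

def waitforlisten(sock_path: str, timeout: float = 10.0) -> bool:
    """Poll until something is accepting connections on a UNIX socket,
    mirroring what the waitforlisten helper in the log does for
    /var/tmp/spdk.sock. Returns False if the deadline passes first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # target is up and listening
        except OSError:
            time.sleep(0.1)      # not ready yet; retry shortly
        finally:
            s.close()
    return False
```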
16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:57.376 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:57.376 16:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:57.376 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:57.376 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:57.376 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:57.634 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:57.634 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:57.892 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:57.892 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:57.892 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:57.892 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:58.152 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:58.152 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:58.152 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:58.152 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:58.152 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:58.410 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:58.410 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:58.410 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:58.669 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:58.669 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:58.928 16:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.qYSdpV8dyp 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Lzmjv981ND 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.qYSdpV8dyp 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Lzmjv981ND 00:17:58.928 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:59.188 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:59.446 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.qYSdpV8dyp 00:17:59.446 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qYSdpV8dyp 00:17:59.446 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.446 [2024-11-20 16:11:00.257953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.446 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.706 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:59.964 [2024-11-20 16:11:00.626902] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:59.964 [2024-11-20 16:11:00.627125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.964 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:00.224 malloc0 00:18:00.224 16:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:00.224 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qYSdpV8dyp 00:18:00.483 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:00.741 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qYSdpV8dyp 00:18:10.719 Initializing NVMe Controllers 00:18:10.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:10.720 Initialization complete. Launching workers. 
00:18:10.720 ======================================================== 00:18:10.720 Latency(us) 00:18:10.720 Device Information : IOPS MiB/s Average min max 00:18:10.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16388.65 64.02 3905.25 885.87 5267.05 00:18:10.720 ======================================================== 00:18:10.720 Total : 16388.65 64.02 3905.25 885.87 5267.05 00:18:10.720 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYSdpV8dyp 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qYSdpV8dyp 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2751159 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2751159 /var/tmp/bdevperf.sock 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2751159 ']' 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
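Both bdevperf runs in this section authenticate with the PSK file produced earlier by format_interchange_psk. A sketch of that interchange format, reconstructed from the inputs and outputs visible in the log (the little-endian CRC-32 byte order is an assumption, not something the log confirms):

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # NVMe TLS PSK interchange format:
    # "NVMeTLSkey-1:<hash>:base64(key bytes + CRC-32):"
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")  # assumed byte order
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(psk)
```

The two keys in the log (key and key_2) follow this shape: a fixed prefix, a two-digit hash identifier, and a base64 blob whose first 42 characters encode the ASCII hex string itself.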
00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.720 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.979 [2024-11-20 16:11:11.566895] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:10.979 [2024-11-20 16:11:11.566941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751159 ] 00:18:10.979 [2024-11-20 16:11:11.642516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.979 [2024-11-20 16:11:11.683386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.979 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.979 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.979 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qYSdpV8dyp 00:18:11.238 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:11.498 [2024-11-20 16:11:12.155982] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.498 TLSTESTn1 00:18:11.498 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:11.757 Running I/O for 10 seconds... 00:18:13.629 5205.00 IOPS, 20.33 MiB/s [2024-11-20T15:11:15.402Z] 5279.00 IOPS, 20.62 MiB/s [2024-11-20T15:11:16.778Z] 5324.33 IOPS, 20.80 MiB/s [2024-11-20T15:11:17.714Z] 5228.75 IOPS, 20.42 MiB/s [2024-11-20T15:11:18.651Z] 5101.40 IOPS, 19.93 MiB/s [2024-11-20T15:11:19.586Z] 5047.00 IOPS, 19.71 MiB/s [2024-11-20T15:11:20.522Z] 5012.86 IOPS, 19.58 MiB/s [2024-11-20T15:11:21.458Z] 4977.25 IOPS, 19.44 MiB/s [2024-11-20T15:11:22.395Z] 4950.33 IOPS, 19.34 MiB/s [2024-11-20T15:11:22.654Z] 4922.90 IOPS, 19.23 MiB/s 00:18:21.818 Latency(us) 00:18:21.818 [2024-11-20T15:11:22.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.818 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:21.818 Verification LBA range: start 0x0 length 0x2000 00:18:21.818 TLSTESTn1 : 10.02 4926.95 19.25 0.00 0.00 25941.26 5755.77 36016.31 00:18:21.818 [2024-11-20T15:11:22.655Z] =================================================================================================================== 00:18:21.818 [2024-11-20T15:11:22.655Z] Total : 4926.95 19.25 0.00 0.00 25941.26 5755.77 36016.31 00:18:21.818 { 00:18:21.818 "results": [ 00:18:21.818 { 00:18:21.818 "job": "TLSTESTn1", 00:18:21.818 "core_mask": "0x4", 00:18:21.818 "workload": "verify", 00:18:21.818 "status": "finished", 00:18:21.818 "verify_range": { 00:18:21.818 "start": 0, 00:18:21.818 "length": 8192 00:18:21.818 }, 00:18:21.818 "queue_depth": 128, 00:18:21.818 "io_size": 4096, 00:18:21.818 "runtime": 10.017557, 00:18:21.818 "iops": 
4926.949754316347, 00:18:21.818 "mibps": 19.24589747779823, 00:18:21.818 "io_failed": 0, 00:18:21.818 "io_timeout": 0, 00:18:21.818 "avg_latency_us": 25941.264632959475, 00:18:21.818 "min_latency_us": 5755.770434782608, 00:18:21.818 "max_latency_us": 36016.30608695652 00:18:21.818 } 00:18:21.818 ], 00:18:21.818 "core_count": 1 00:18:21.818 } 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2751159 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2751159 ']' 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2751159 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751159 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751159' 00:18:21.818 killing process with pid 2751159 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2751159 00:18:21.818 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.818 00:18:21.818 Latency(us) 00:18:21.818 [2024-11-20T15:11:22.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.818 [2024-11-20T15:11:22.655Z] 
=================================================================================================================== 00:18:21.818 [2024-11-20T15:11:22.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2751159 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lzmjv981ND 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lzmjv981ND 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lzmjv981ND 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Lzmjv981ND 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2752862 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2752862 /var/tmp/bdevperf.sock 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2752862 ']' 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.818 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.076 [2024-11-20 16:11:22.676864] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
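The result tables and JSON earlier in this section report throughput both as IOPS and MiB/s; the MiB/s figure is just IOPS scaled by the 4096-byte I/O size (the -o 4096 argument). Checking that arithmetic against the reported numbers:

```python
# Figures taken from the bdevperf JSON results above.
iops = 4926.949754316347
io_size = 4096  # bytes, from the "-o 4096" argument
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # matches the reported "mibps" field (~19.25)

# Same check for the spdk_nvme_perf run (16388.65 IOPS -> 64.02 MiB/s).
perf_mibps = 16388.65 * io_size / (1024 * 1024)
print(round(perf_mibps, 2))
```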
00:18:22.076 [2024-11-20 16:11:22.676917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752862 ] 00:18:22.076 [2024-11-20 16:11:22.748827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.076 [2024-11-20 16:11:22.787335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.076 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.076 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.076 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lzmjv981ND 00:18:22.334 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.592 [2024-11-20 16:11:23.255582] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.592 [2024-11-20 16:11:23.262521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:22.592 [2024-11-20 16:11:23.263073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcae170 (107): Transport endpoint is not connected 00:18:22.593 [2024-11-20 16:11:23.264066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcae170 (9): Bad file descriptor 00:18:22.593 [2024-11-20 
16:11:23.265068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:22.593 [2024-11-20 16:11:23.265077] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:22.593 [2024-11-20 16:11:23.265085] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:22.593 [2024-11-20 16:11:23.265095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:22.593 request: 00:18:22.593 { 00:18:22.593 "name": "TLSTEST", 00:18:22.593 "trtype": "tcp", 00:18:22.593 "traddr": "10.0.0.2", 00:18:22.593 "adrfam": "ipv4", 00:18:22.593 "trsvcid": "4420", 00:18:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.593 "prchk_reftag": false, 00:18:22.593 "prchk_guard": false, 00:18:22.593 "hdgst": false, 00:18:22.593 "ddgst": false, 00:18:22.593 "psk": "key0", 00:18:22.593 "allow_unrecognized_csi": false, 00:18:22.593 "method": "bdev_nvme_attach_controller", 00:18:22.593 "req_id": 1 00:18:22.593 } 00:18:22.593 Got JSON-RPC error response 00:18:22.593 response: 00:18:22.593 { 00:18:22.593 "code": -5, 00:18:22.593 "message": "Input/output error" 00:18:22.593 } 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2752862 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2752862 ']' 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2752862 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2752862 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2752862' 00:18:22.593 killing process with pid 2752862 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2752862 00:18:22.593 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.593 00:18:22.593 Latency(us) 00:18:22.593 [2024-11-20T15:11:23.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.593 [2024-11-20T15:11:23.430Z] =================================================================================================================== 00:18:22.593 [2024-11-20T15:11:23.430Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.593 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2752862 00:18:22.853 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYSdpV8dyp 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
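The NOT wrapper seen in this section (first for the wrong-key run above, again here for host2) asserts that a command fails: it runs the command, captures the exit status in es, and succeeds only when es is non-zero. A Python sketch of the same inversion pattern (the shell helper works on exit statuses rather than exceptions):

```python
from typing import Callable

def NOT(func: Callable[..., object], *args, **kwargs) -> bool:
    """Return True iff func(*args, **kwargs) raises, mirroring how the
    shell NOT helper treats a non-zero exit status as the expected
    outcome (e.g. attaching a controller with the wrong PSK must fail)."""
    try:
        func(*args, **kwargs)
    except Exception:
        return True   # the "command" failed, which is what we wanted
    return False      # it unexpectedly succeeded

print(NOT(int, "not-a-number"))  # a deliberately failing "command"
```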
00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYSdpV8dyp 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYSdpV8dyp 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qYSdpV8dyp 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2753018 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2753018 
/var/tmp/bdevperf.sock 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2753018 ']' 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.854 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 [2024-11-20 16:11:23.529389] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:22.854 [2024-11-20 16:11:23.529444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753018 ] 00:18:22.854 [2024-11-20 16:11:23.596014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.854 [2024-11-20 16:11:23.634279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.113 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.113 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.113 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qYSdpV8dyp 00:18:23.113 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:23.377 [2024-11-20 16:11:24.102201] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.377 [2024-11-20 16:11:24.106984] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:23.377 [2024-11-20 16:11:24.107005] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:23.378 [2024-11-20 16:11:24.107045] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:23.378 [2024-11-20 16:11:24.107682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac170 (107): Transport endpoint is not connected 00:18:23.378 [2024-11-20 16:11:24.108675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac170 (9): Bad file descriptor 00:18:23.378 [2024-11-20 16:11:24.109676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:23.378 [2024-11-20 16:11:24.109686] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:23.378 [2024-11-20 16:11:24.109693] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:23.378 [2024-11-20 16:11:24.109704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:23.378 request: 00:18:23.378 { 00:18:23.378 "name": "TLSTEST", 00:18:23.378 "trtype": "tcp", 00:18:23.378 "traddr": "10.0.0.2", 00:18:23.378 "adrfam": "ipv4", 00:18:23.378 "trsvcid": "4420", 00:18:23.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.378 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:23.378 "prchk_reftag": false, 00:18:23.378 "prchk_guard": false, 00:18:23.378 "hdgst": false, 00:18:23.378 "ddgst": false, 00:18:23.378 "psk": "key0", 00:18:23.378 "allow_unrecognized_csi": false, 00:18:23.378 "method": "bdev_nvme_attach_controller", 00:18:23.378 "req_id": 1 00:18:23.378 } 00:18:23.378 Got JSON-RPC error response 00:18:23.378 response: 00:18:23.378 { 00:18:23.378 "code": -5, 00:18:23.378 "message": "Input/output error" 00:18:23.378 } 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2753018 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2753018 ']' 00:18:23.378 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2753018 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753018 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753018' 00:18:23.378 killing process with pid 2753018 00:18:23.378 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2753018 00:18:23.378 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.378 00:18:23.378 Latency(us) 00:18:23.378 [2024-11-20T15:11:24.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.379 [2024-11-20T15:11:24.216Z] =================================================================================================================== 00:18:23.379 [2024-11-20T15:11:24.216Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:23.379 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2753018 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.641 16:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYSdpV8dyp 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYSdpV8dyp 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYSdpV8dyp 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qYSdpV8dyp 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2753248 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2753248 /var/tmp/bdevperf.sock 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2753248 ']' 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.641 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.642 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.642 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.642 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 [2024-11-20 16:11:24.394302] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:23.642 [2024-11-20 16:11:24.394349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753248 ] 00:18:23.642 [2024-11-20 16:11:24.464896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.900 [2024-11-20 16:11:24.503855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.900 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.900 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.900 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qYSdpV8dyp 00:18:24.160 16:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.160 [2024-11-20 16:11:24.968280] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.160 [2024-11-20 16:11:24.976802] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:24.160 [2024-11-20 16:11:24.976822] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:24.160 [2024-11-20 16:11:24.976845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:24.160 [2024-11-20 16:11:24.977713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665170 (107): Transport endpoint is not connected 00:18:24.160 [2024-11-20 16:11:24.978707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665170 (9): Bad file descriptor 00:18:24.160 [2024-11-20 16:11:24.979708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:24.160 [2024-11-20 16:11:24.979717] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:24.160 [2024-11-20 16:11:24.979725] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:24.160 [2024-11-20 16:11:24.979735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:24.160 request: 00:18:24.160 { 00:18:24.160 "name": "TLSTEST", 00:18:24.160 "trtype": "tcp", 00:18:24.160 "traddr": "10.0.0.2", 00:18:24.160 "adrfam": "ipv4", 00:18:24.160 "trsvcid": "4420", 00:18:24.160 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:24.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.160 "prchk_reftag": false, 00:18:24.160 "prchk_guard": false, 00:18:24.160 "hdgst": false, 00:18:24.160 "ddgst": false, 00:18:24.160 "psk": "key0", 00:18:24.160 "allow_unrecognized_csi": false, 00:18:24.160 "method": "bdev_nvme_attach_controller", 00:18:24.160 "req_id": 1 00:18:24.160 } 00:18:24.160 Got JSON-RPC error response 00:18:24.160 response: 00:18:24.160 { 00:18:24.160 "code": -5, 00:18:24.160 "message": "Input/output error" 00:18:24.160 } 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2753248 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2753248 ']' 00:18:24.420 16:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2753248 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753248 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753248' 00:18:24.420 killing process with pid 2753248 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2753248 00:18:24.420 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.420 00:18:24.420 Latency(us) 00:18:24.420 [2024-11-20T15:11:25.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.420 [2024-11-20T15:11:25.257Z] =================================================================================================================== 00:18:24.420 [2024-11-20T15:11:25.257Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2753248 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.420 16:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2753266 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.420 16:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2753266 /var/tmp/bdevperf.sock 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2753266 ']' 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.420 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.679 [2024-11-20 16:11:25.262183] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:24.679 [2024-11-20 16:11:25.262236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753266 ] 00:18:24.679 [2024-11-20 16:11:25.338360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.679 [2024-11-20 16:11:25.378523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.679 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.679 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.679 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:24.939 [2024-11-20 16:11:25.653732] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:24.939 [2024-11-20 16:11:25.653760] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:24.939 request: 00:18:24.939 { 00:18:24.939 "name": "key0", 00:18:24.939 "path": "", 00:18:24.939 "method": "keyring_file_add_key", 00:18:24.939 "req_id": 1 00:18:24.939 } 00:18:24.939 Got JSON-RPC error response 00:18:24.939 response: 00:18:24.939 { 00:18:24.939 "code": -1, 00:18:24.939 "message": "Operation not permitted" 00:18:24.939 } 00:18:24.939 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.198 [2024-11-20 16:11:25.850331] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:25.198 [2024-11-20 16:11:25.850360] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:25.198 request: 00:18:25.198 { 00:18:25.198 "name": "TLSTEST", 00:18:25.198 "trtype": "tcp", 00:18:25.198 "traddr": "10.0.0.2", 00:18:25.198 "adrfam": "ipv4", 00:18:25.198 "trsvcid": "4420", 00:18:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.198 "prchk_reftag": false, 00:18:25.198 "prchk_guard": false, 00:18:25.198 "hdgst": false, 00:18:25.198 "ddgst": false, 00:18:25.198 "psk": "key0", 00:18:25.198 "allow_unrecognized_csi": false, 00:18:25.198 "method": "bdev_nvme_attach_controller", 00:18:25.198 "req_id": 1 00:18:25.198 } 00:18:25.198 Got JSON-RPC error response 00:18:25.198 response: 00:18:25.198 { 00:18:25.198 "code": -126, 00:18:25.198 "message": "Required key not available" 00:18:25.198 } 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2753266 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2753266 ']' 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2753266 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753266 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753266' 00:18:25.198 killing process with pid 2753266 
00:18:25.198 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2753266 00:18:25.198 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.198 00:18:25.199 Latency(us) 00:18:25.199 [2024-11-20T15:11:26.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.199 [2024-11-20T15:11:26.036Z] =================================================================================================================== 00:18:25.199 [2024-11-20T15:11:26.036Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.199 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2753266 00:18:25.458 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:25.458 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:25.458 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.458 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.458 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.458 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2748708 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2748708 ']' 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2748708 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2748708 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2748708' 00:18:25.459 killing process with pid 2748708 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2748708 00:18:25.459 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2748708 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.GlzvcAtcUL 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:25.718 16:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.GlzvcAtcUL 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2753513 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2753513 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2753513 ']' 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.718 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.718 [2024-11-20 16:11:26.388754] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:25.718 [2024-11-20 16:11:26.388801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.718 [2024-11-20 16:11:26.466053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.718 [2024-11-20 16:11:26.502117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.718 [2024-11-20 16:11:26.502152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.718 [2024-11-20 16:11:26.502159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.718 [2024-11-20 16:11:26.502166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.718 [2024-11-20 16:11:26.502171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:25.718 [2024-11-20 16:11:26.502720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.GlzvcAtcUL 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GlzvcAtcUL 00:18:25.989 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.989 [2024-11-20 16:11:26.819036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.248 16:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.248 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:26.507 [2024-11-20 16:11:27.187972] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.507 [2024-11-20 16:11:27.188196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:26.507 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:26.766 malloc0 00:18:26.766 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.766 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:27.025 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GlzvcAtcUL 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GlzvcAtcUL 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2753770 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.284 16:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2753770 /var/tmp/bdevperf.sock 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2753770 ']' 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.284 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.284 [2024-11-20 16:11:28.006295] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:27.284 [2024-11-20 16:11:28.006343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753770 ] 00:18:27.284 [2024-11-20 16:11:28.082280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.543 [2024-11-20 16:11:28.123632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.543 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.543 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.543 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:27.802 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.803 [2024-11-20 16:11:28.576288] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.061 TLSTESTn1 00:18:28.061 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:28.061 Running I/O for 10 seconds... 
00:18:29.934 5368.00 IOPS, 20.97 MiB/s [2024-11-20T15:11:32.148Z] 5390.50 IOPS, 21.06 MiB/s [2024-11-20T15:11:33.084Z] 5421.00 IOPS, 21.18 MiB/s [2024-11-20T15:11:34.021Z] 5441.75 IOPS, 21.26 MiB/s [2024-11-20T15:11:34.956Z] 5437.20 IOPS, 21.24 MiB/s [2024-11-20T15:11:35.894Z] 5411.17 IOPS, 21.14 MiB/s [2024-11-20T15:11:36.829Z] 5362.00 IOPS, 20.95 MiB/s [2024-11-20T15:11:38.207Z] 5316.50 IOPS, 20.77 MiB/s [2024-11-20T15:11:39.144Z] 5269.67 IOPS, 20.58 MiB/s [2024-11-20T15:11:39.144Z] 5238.70 IOPS, 20.46 MiB/s 00:18:38.307 Latency(us) 00:18:38.307 [2024-11-20T15:11:39.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.307 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:38.307 Verification LBA range: start 0x0 length 0x2000 00:18:38.307 TLSTESTn1 : 10.02 5240.23 20.47 0.00 0.00 24385.01 5014.93 30773.43 00:18:38.307 [2024-11-20T15:11:39.144Z] =================================================================================================================== 00:18:38.307 [2024-11-20T15:11:39.144Z] Total : 5240.23 20.47 0.00 0.00 24385.01 5014.93 30773.43 00:18:38.307 { 00:18:38.307 "results": [ 00:18:38.307 { 00:18:38.307 "job": "TLSTESTn1", 00:18:38.307 "core_mask": "0x4", 00:18:38.307 "workload": "verify", 00:18:38.307 "status": "finished", 00:18:38.307 "verify_range": { 00:18:38.307 "start": 0, 00:18:38.307 "length": 8192 00:18:38.307 }, 00:18:38.307 "queue_depth": 128, 00:18:38.307 "io_size": 4096, 00:18:38.307 "runtime": 10.02131, 00:18:38.307 "iops": 5240.233063342018, 00:18:38.307 "mibps": 20.469660403679757, 00:18:38.307 "io_failed": 0, 00:18:38.307 "io_timeout": 0, 00:18:38.307 "avg_latency_us": 24385.011997595673, 00:18:38.307 "min_latency_us": 5014.928695652174, 00:18:38.307 "max_latency_us": 30773.426086956522 00:18:38.308 } 00:18:38.308 ], 00:18:38.308 "core_count": 1 00:18:38.308 } 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
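The JSON results block bdevperf prints above can be post-processed directly; a minimal sketch using the exact values from the log (note the reported MiB/s is just IOPS × io_size scaled to MiB, which the assertion cross-checks):

```python
import json

# Results block as printed by bdevperf in the log above (values copied verbatim).
results = json.loads("""
{
  "results": [
    {
      "job": "TLSTESTn1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 10.02131,
      "iops": 5240.233063342018,
      "mibps": 20.469660403679757,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# Cross-check: MiB/s should equal IOPS * io_size / 1 MiB.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {derived_mibps:.2f} MiB/s')
assert abs(derived_mibps - job["mibps"]) < 1e-6
```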
exit 1' SIGINT SIGTERM EXIT 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2753770 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2753770 ']' 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2753770 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753770 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753770' 00:18:38.308 killing process with pid 2753770 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2753770 00:18:38.308 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.308 00:18:38.308 Latency(us) 00:18:38.308 [2024-11-20T15:11:39.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.308 [2024-11-20T15:11:39.145Z] =================================================================================================================== 00:18:38.308 [2024-11-20T15:11:39.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.308 16:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2753770 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.GlzvcAtcUL 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GlzvcAtcUL 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GlzvcAtcUL 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GlzvcAtcUL 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GlzvcAtcUL 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2755604 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2755604 /var/tmp/bdevperf.sock 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2755604 ']' 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.308 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.308 [2024-11-20 16:11:39.084066] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:38.308 [2024-11-20 16:11:39.084116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755604 ] 00:18:38.566 [2024-11-20 16:11:39.145257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.566 [2024-11-20 16:11:39.184162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.566 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.566 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.566 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:38.906 [2024-11-20 16:11:39.460014] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GlzvcAtcUL': 0100666 00:18:38.906 [2024-11-20 16:11:39.460047] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:38.906 request: 00:18:38.906 { 00:18:38.906 "name": "key0", 00:18:38.906 "path": "/tmp/tmp.GlzvcAtcUL", 00:18:38.906 "method": "keyring_file_add_key", 00:18:38.906 "req_id": 1 00:18:38.906 } 00:18:38.906 Got JSON-RPC error response 00:18:38.906 response: 00:18:38.906 { 00:18:38.906 "code": -1, 00:18:38.906 "message": "Operation not permitted" 00:18:38.906 } 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.906 [2024-11-20 16:11:39.668649] bdev_nvme_rpc.c: 
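The `Invalid permissions for key file '/tmp/tmp.GlzvcAtcUL': 0100666` error above is SPDK's keyring refusing a PSK file that is readable by group/other (the test deliberately ran `chmod 0666` first). A small Python sketch of that style of check — the `0o077` mask is an assumption inferred from the logged behavior, not SPDK's exact source:

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Mimic keyring_file_check_path's intent: reject key files whose
    mode grants any group/other access (owner-only, e.g. 0600, passes)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

fd, key_path = tempfile.mkstemp()
os.close(fd)

os.chmod(key_path, 0o666)                   # world-readable, as after 'chmod 0666'
print(key_file_permissions_ok(key_path))    # False: rejected, like the RPC error above

os.chmod(key_path, 0o600)                   # owner-only, as after 'chmod 0600'
print(key_file_permissions_ok(key_path))    # True: accepted

os.unlink(key_path)
```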
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.906 [2024-11-20 16:11:39.668688] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:38.906 request: 00:18:38.906 { 00:18:38.906 "name": "TLSTEST", 00:18:38.906 "trtype": "tcp", 00:18:38.906 "traddr": "10.0.0.2", 00:18:38.906 "adrfam": "ipv4", 00:18:38.906 "trsvcid": "4420", 00:18:38.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.906 "prchk_reftag": false, 00:18:38.906 "prchk_guard": false, 00:18:38.906 "hdgst": false, 00:18:38.906 "ddgst": false, 00:18:38.906 "psk": "key0", 00:18:38.906 "allow_unrecognized_csi": false, 00:18:38.906 "method": "bdev_nvme_attach_controller", 00:18:38.906 "req_id": 1 00:18:38.906 } 00:18:38.906 Got JSON-RPC error response 00:18:38.906 response: 00:18:38.906 { 00:18:38.906 "code": -126, 00:18:38.906 "message": "Required key not available" 00:18:38.906 } 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2755604 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2755604 ']' 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2755604 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.906 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755604 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2755604' 00:18:39.218 killing process with pid 2755604 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2755604 00:18:39.218 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.218 00:18:39.218 Latency(us) 00:18:39.218 [2024-11-20T15:11:40.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.218 [2024-11-20T15:11:40.055Z] =================================================================================================================== 00:18:39.218 [2024-11-20T15:11:40.055Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2755604 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2753513 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2753513 ']' 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2753513 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753513 00:18:39.218 
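The `NOT run_bdevperf ... es=1` sequence above is autotest_common.sh's expected-failure idiom: the test step passes only if the wrapped command fails. A simplified sketch of that pattern (the real helper also inspects the exit status `es`, e.g. treating values above 128 specially):

```shell
#!/bin/sh
# NOT: succeed iff the wrapped command fails (simplified sketch of the
# negation helper used as 'NOT run_bdevperf ...' in the log above).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    else
        return 0    # command failed, which is what the test wanted
    fi
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```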
16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753513' 00:18:39.218 killing process with pid 2753513 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2753513 00:18:39.218 16:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2753513 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2755850 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2755850 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2755850 ']' 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:39.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.477 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.477 [2024-11-20 16:11:40.169243] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:39.477 [2024-11-20 16:11:40.169293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.477 [2024-11-20 16:11:40.247200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.477 [2024-11-20 16:11:40.288570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.477 [2024-11-20 16:11:40.288608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.477 [2024-11-20 16:11:40.288615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.477 [2024-11-20 16:11:40.288621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.477 [2024-11-20 16:11:40.288627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.477 [2024-11-20 16:11:40.289200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.GlzvcAtcUL 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GlzvcAtcUL 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.GlzvcAtcUL 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GlzvcAtcUL 00:18:39.736 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.994 [2024-11-20 16:11:40.599209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.994 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.251 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.251 [2024-11-20 16:11:41.000242] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.251 [2024-11-20 16:11:41.000424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.251 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.510 malloc0 00:18:40.510 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.768 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:41.027 [2024-11-20 16:11:41.609869] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GlzvcAtcUL': 0100666 00:18:41.027 [2024-11-20 16:11:41.609893] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:41.027 request: 00:18:41.027 { 00:18:41.027 "name": "key0", 00:18:41.027 "path": "/tmp/tmp.GlzvcAtcUL", 00:18:41.027 "method": "keyring_file_add_key", 00:18:41.027 "req_id": 1 
00:18:41.027 } 00:18:41.027 Got JSON-RPC error response 00:18:41.027 response: 00:18:41.027 { 00:18:41.027 "code": -1, 00:18:41.027 "message": "Operation not permitted" 00:18:41.027 } 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.027 [2024-11-20 16:11:41.806409] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:41.027 [2024-11-20 16:11:41.806446] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:41.027 request: 00:18:41.027 { 00:18:41.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.027 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.027 "psk": "key0", 00:18:41.027 "method": "nvmf_subsystem_add_host", 00:18:41.027 "req_id": 1 00:18:41.027 } 00:18:41.027 Got JSON-RPC error response 00:18:41.027 response: 00:18:41.027 { 00:18:41.027 "code": -32603, 00:18:41.027 "message": "Internal error" 00:18:41.027 } 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2755850 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2755850 ']' 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2755850 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.027 16:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.027 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755850 00:18:41.285 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.286 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.286 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2755850' 00:18:41.286 killing process with pid 2755850 00:18:41.286 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2755850 00:18:41.286 16:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2755850 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.GlzvcAtcUL 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2756122 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2756122 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2756122 ']' 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.286 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.286 [2024-11-20 16:11:42.097301] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:41.286 [2024-11-20 16:11:42.097350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.545 [2024-11-20 16:11:42.177694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.545 [2024-11-20 16:11:42.213172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.545 [2024-11-20 16:11:42.213208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.545 [2024-11-20 16:11:42.213215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.545 [2024-11-20 16:11:42.213220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.545 [2024-11-20 16:11:42.213226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.545 [2024-11-20 16:11:42.213780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.GlzvcAtcUL 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GlzvcAtcUL 00:18:41.545 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:41.804 [2024-11-20 16:11:42.534928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.804 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:42.062 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:42.321 [2024-11-20 16:11:42.935969] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.321 [2024-11-20 16:11:42.936177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:42.321 16:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.321 malloc0 00:18:42.579 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:42.579 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:42.837 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2756402 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2756402 /var/tmp/bdevperf.sock 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2756402 ']' 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:43.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.096 16:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.096 [2024-11-20 16:11:43.799849] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:43.096 [2024-11-20 16:11:43.799902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756402 ] 00:18:43.096 [2024-11-20 16:11:43.874620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.096 [2024-11-20 16:11:43.915699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.354 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.354 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.354 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:43.612 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.612 [2024-11-20 16:11:44.384489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.871 TLSTESTn1 00:18:43.871 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:44.129 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:44.129 "subsystems": [ 00:18:44.129 { 00:18:44.129 "subsystem": "keyring", 00:18:44.129 "config": [ 00:18:44.129 { 00:18:44.129 "method": "keyring_file_add_key", 00:18:44.129 "params": { 00:18:44.129 "name": "key0", 00:18:44.129 "path": "/tmp/tmp.GlzvcAtcUL" 00:18:44.129 } 00:18:44.129 } 00:18:44.129 ] 00:18:44.129 }, 00:18:44.129 { 00:18:44.129 "subsystem": "iobuf", 00:18:44.129 "config": [ 00:18:44.129 { 00:18:44.129 "method": "iobuf_set_options", 00:18:44.129 "params": { 00:18:44.129 "small_pool_count": 8192, 00:18:44.129 "large_pool_count": 1024, 00:18:44.129 "small_bufsize": 8192, 00:18:44.129 "large_bufsize": 135168, 00:18:44.129 "enable_numa": false 00:18:44.129 } 00:18:44.129 } 00:18:44.129 ] 00:18:44.129 }, 00:18:44.129 { 00:18:44.129 "subsystem": "sock", 00:18:44.129 "config": [ 00:18:44.129 { 00:18:44.129 "method": "sock_set_default_impl", 00:18:44.129 "params": { 00:18:44.129 "impl_name": "posix" 00:18:44.129 } 00:18:44.129 }, 00:18:44.129 { 00:18:44.129 "method": "sock_impl_set_options", 00:18:44.130 "params": { 00:18:44.130 "impl_name": "ssl", 00:18:44.130 "recv_buf_size": 4096, 00:18:44.130 "send_buf_size": 4096, 00:18:44.130 "enable_recv_pipe": true, 00:18:44.130 "enable_quickack": false, 00:18:44.130 "enable_placement_id": 0, 00:18:44.130 "enable_zerocopy_send_server": true, 00:18:44.130 "enable_zerocopy_send_client": false, 00:18:44.130 "zerocopy_threshold": 0, 00:18:44.130 "tls_version": 0, 00:18:44.130 "enable_ktls": false 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "sock_impl_set_options", 00:18:44.130 "params": { 00:18:44.130 "impl_name": "posix", 00:18:44.130 "recv_buf_size": 2097152, 00:18:44.130 "send_buf_size": 2097152, 00:18:44.130 "enable_recv_pipe": true, 00:18:44.130 "enable_quickack": false, 00:18:44.130 "enable_placement_id": 0, 
00:18:44.130 "enable_zerocopy_send_server": true, 00:18:44.130 "enable_zerocopy_send_client": false, 00:18:44.130 "zerocopy_threshold": 0, 00:18:44.130 "tls_version": 0, 00:18:44.130 "enable_ktls": false 00:18:44.130 } 00:18:44.130 } 00:18:44.130 ] 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "subsystem": "vmd", 00:18:44.130 "config": [] 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "subsystem": "accel", 00:18:44.130 "config": [ 00:18:44.130 { 00:18:44.130 "method": "accel_set_options", 00:18:44.130 "params": { 00:18:44.130 "small_cache_size": 128, 00:18:44.130 "large_cache_size": 16, 00:18:44.130 "task_count": 2048, 00:18:44.130 "sequence_count": 2048, 00:18:44.130 "buf_count": 2048 00:18:44.130 } 00:18:44.130 } 00:18:44.130 ] 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "subsystem": "bdev", 00:18:44.130 "config": [ 00:18:44.130 { 00:18:44.130 "method": "bdev_set_options", 00:18:44.130 "params": { 00:18:44.130 "bdev_io_pool_size": 65535, 00:18:44.130 "bdev_io_cache_size": 256, 00:18:44.130 "bdev_auto_examine": true, 00:18:44.130 "iobuf_small_cache_size": 128, 00:18:44.130 "iobuf_large_cache_size": 16 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "bdev_raid_set_options", 00:18:44.130 "params": { 00:18:44.130 "process_window_size_kb": 1024, 00:18:44.130 "process_max_bandwidth_mb_sec": 0 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "bdev_iscsi_set_options", 00:18:44.130 "params": { 00:18:44.130 "timeout_sec": 30 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "bdev_nvme_set_options", 00:18:44.130 "params": { 00:18:44.130 "action_on_timeout": "none", 00:18:44.130 "timeout_us": 0, 00:18:44.130 "timeout_admin_us": 0, 00:18:44.130 "keep_alive_timeout_ms": 10000, 00:18:44.130 "arbitration_burst": 0, 00:18:44.130 "low_priority_weight": 0, 00:18:44.130 "medium_priority_weight": 0, 00:18:44.130 "high_priority_weight": 0, 00:18:44.130 "nvme_adminq_poll_period_us": 10000, 00:18:44.130 "nvme_ioq_poll_period_us": 0, 
00:18:44.130 "io_queue_requests": 0, 00:18:44.130 "delay_cmd_submit": true, 00:18:44.130 "transport_retry_count": 4, 00:18:44.130 "bdev_retry_count": 3, 00:18:44.130 "transport_ack_timeout": 0, 00:18:44.130 "ctrlr_loss_timeout_sec": 0, 00:18:44.130 "reconnect_delay_sec": 0, 00:18:44.130 "fast_io_fail_timeout_sec": 0, 00:18:44.130 "disable_auto_failback": false, 00:18:44.130 "generate_uuids": false, 00:18:44.130 "transport_tos": 0, 00:18:44.130 "nvme_error_stat": false, 00:18:44.130 "rdma_srq_size": 0, 00:18:44.130 "io_path_stat": false, 00:18:44.130 "allow_accel_sequence": false, 00:18:44.130 "rdma_max_cq_size": 0, 00:18:44.130 "rdma_cm_event_timeout_ms": 0, 00:18:44.130 "dhchap_digests": [ 00:18:44.130 "sha256", 00:18:44.130 "sha384", 00:18:44.130 "sha512" 00:18:44.130 ], 00:18:44.130 "dhchap_dhgroups": [ 00:18:44.130 "null", 00:18:44.130 "ffdhe2048", 00:18:44.130 "ffdhe3072", 00:18:44.130 "ffdhe4096", 00:18:44.130 "ffdhe6144", 00:18:44.130 "ffdhe8192" 00:18:44.130 ] 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "bdev_nvme_set_hotplug", 00:18:44.130 "params": { 00:18:44.130 "period_us": 100000, 00:18:44.130 "enable": false 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "bdev_malloc_create", 00:18:44.130 "params": { 00:18:44.130 "name": "malloc0", 00:18:44.130 "num_blocks": 8192, 00:18:44.130 "block_size": 4096, 00:18:44.130 "physical_block_size": 4096, 00:18:44.130 "uuid": "812c9376-6373-4131-9c44-3c8a062acdc4", 00:18:44.130 "optimal_io_boundary": 0, 00:18:44.130 "md_size": 0, 00:18:44.130 "dif_type": 0, 00:18:44.130 "dif_is_head_of_md": false, 00:18:44.130 "dif_pi_format": 0 00:18:44.130 } 00:18:44.130 }, 00:18:44.130 { 00:18:44.130 "method": "bdev_wait_for_examine" 00:18:44.130 } 00:18:44.131 ] 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "subsystem": "nbd", 00:18:44.131 "config": [] 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "subsystem": "scheduler", 00:18:44.131 "config": [ 00:18:44.131 { 00:18:44.131 "method": 
"framework_set_scheduler", 00:18:44.131 "params": { 00:18:44.131 "name": "static" 00:18:44.131 } 00:18:44.131 } 00:18:44.131 ] 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "subsystem": "nvmf", 00:18:44.131 "config": [ 00:18:44.131 { 00:18:44.131 "method": "nvmf_set_config", 00:18:44.131 "params": { 00:18:44.131 "discovery_filter": "match_any", 00:18:44.131 "admin_cmd_passthru": { 00:18:44.131 "identify_ctrlr": false 00:18:44.131 }, 00:18:44.131 "dhchap_digests": [ 00:18:44.131 "sha256", 00:18:44.131 "sha384", 00:18:44.131 "sha512" 00:18:44.131 ], 00:18:44.131 "dhchap_dhgroups": [ 00:18:44.131 "null", 00:18:44.131 "ffdhe2048", 00:18:44.131 "ffdhe3072", 00:18:44.131 "ffdhe4096", 00:18:44.131 "ffdhe6144", 00:18:44.131 "ffdhe8192" 00:18:44.131 ] 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_set_max_subsystems", 00:18:44.131 "params": { 00:18:44.131 "max_subsystems": 1024 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_set_crdt", 00:18:44.131 "params": { 00:18:44.131 "crdt1": 0, 00:18:44.131 "crdt2": 0, 00:18:44.131 "crdt3": 0 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_create_transport", 00:18:44.131 "params": { 00:18:44.131 "trtype": "TCP", 00:18:44.131 "max_queue_depth": 128, 00:18:44.131 "max_io_qpairs_per_ctrlr": 127, 00:18:44.131 "in_capsule_data_size": 4096, 00:18:44.131 "max_io_size": 131072, 00:18:44.131 "io_unit_size": 131072, 00:18:44.131 "max_aq_depth": 128, 00:18:44.131 "num_shared_buffers": 511, 00:18:44.131 "buf_cache_size": 4294967295, 00:18:44.131 "dif_insert_or_strip": false, 00:18:44.131 "zcopy": false, 00:18:44.131 "c2h_success": false, 00:18:44.131 "sock_priority": 0, 00:18:44.131 "abort_timeout_sec": 1, 00:18:44.131 "ack_timeout": 0, 00:18:44.131 "data_wr_pool_size": 0 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_create_subsystem", 00:18:44.131 "params": { 00:18:44.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.131 
"allow_any_host": false, 00:18:44.131 "serial_number": "SPDK00000000000001", 00:18:44.131 "model_number": "SPDK bdev Controller", 00:18:44.131 "max_namespaces": 10, 00:18:44.131 "min_cntlid": 1, 00:18:44.131 "max_cntlid": 65519, 00:18:44.131 "ana_reporting": false 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_subsystem_add_host", 00:18:44.131 "params": { 00:18:44.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.131 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.131 "psk": "key0" 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_subsystem_add_ns", 00:18:44.131 "params": { 00:18:44.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.131 "namespace": { 00:18:44.131 "nsid": 1, 00:18:44.131 "bdev_name": "malloc0", 00:18:44.131 "nguid": "812C9376637341319C443C8A062ACDC4", 00:18:44.131 "uuid": "812c9376-6373-4131-9c44-3c8a062acdc4", 00:18:44.131 "no_auto_visible": false 00:18:44.131 } 00:18:44.131 } 00:18:44.131 }, 00:18:44.131 { 00:18:44.131 "method": "nvmf_subsystem_add_listener", 00:18:44.131 "params": { 00:18:44.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.131 "listen_address": { 00:18:44.131 "trtype": "TCP", 00:18:44.131 "adrfam": "IPv4", 00:18:44.131 "traddr": "10.0.0.2", 00:18:44.131 "trsvcid": "4420" 00:18:44.131 }, 00:18:44.131 "secure_channel": true 00:18:44.131 } 00:18:44.131 } 00:18:44.131 ] 00:18:44.131 } 00:18:44.131 ] 00:18:44.131 }' 00:18:44.131 16:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:44.389 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:44.389 "subsystems": [ 00:18:44.389 { 00:18:44.389 "subsystem": "keyring", 00:18:44.389 "config": [ 00:18:44.389 { 00:18:44.389 "method": "keyring_file_add_key", 00:18:44.389 "params": { 00:18:44.389 "name": "key0", 00:18:44.389 "path": "/tmp/tmp.GlzvcAtcUL" 00:18:44.389 } 
00:18:44.389 } 00:18:44.389 ] 00:18:44.389 }, 00:18:44.389 { 00:18:44.389 "subsystem": "iobuf", 00:18:44.389 "config": [ 00:18:44.389 { 00:18:44.389 "method": "iobuf_set_options", 00:18:44.389 "params": { 00:18:44.389 "small_pool_count": 8192, 00:18:44.389 "large_pool_count": 1024, 00:18:44.389 "small_bufsize": 8192, 00:18:44.389 "large_bufsize": 135168, 00:18:44.389 "enable_numa": false 00:18:44.389 } 00:18:44.389 } 00:18:44.389 ] 00:18:44.389 }, 00:18:44.389 { 00:18:44.389 "subsystem": "sock", 00:18:44.390 "config": [ 00:18:44.390 { 00:18:44.390 "method": "sock_set_default_impl", 00:18:44.390 "params": { 00:18:44.390 "impl_name": "posix" 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "sock_impl_set_options", 00:18:44.390 "params": { 00:18:44.390 "impl_name": "ssl", 00:18:44.390 "recv_buf_size": 4096, 00:18:44.390 "send_buf_size": 4096, 00:18:44.390 "enable_recv_pipe": true, 00:18:44.390 "enable_quickack": false, 00:18:44.390 "enable_placement_id": 0, 00:18:44.390 "enable_zerocopy_send_server": true, 00:18:44.390 "enable_zerocopy_send_client": false, 00:18:44.390 "zerocopy_threshold": 0, 00:18:44.390 "tls_version": 0, 00:18:44.390 "enable_ktls": false 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "sock_impl_set_options", 00:18:44.390 "params": { 00:18:44.390 "impl_name": "posix", 00:18:44.390 "recv_buf_size": 2097152, 00:18:44.390 "send_buf_size": 2097152, 00:18:44.390 "enable_recv_pipe": true, 00:18:44.390 "enable_quickack": false, 00:18:44.390 "enable_placement_id": 0, 00:18:44.390 "enable_zerocopy_send_server": true, 00:18:44.390 "enable_zerocopy_send_client": false, 00:18:44.390 "zerocopy_threshold": 0, 00:18:44.390 "tls_version": 0, 00:18:44.390 "enable_ktls": false 00:18:44.390 } 00:18:44.390 } 00:18:44.390 ] 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "subsystem": "vmd", 00:18:44.390 "config": [] 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "subsystem": "accel", 00:18:44.390 "config": [ 00:18:44.390 { 00:18:44.390 
"method": "accel_set_options", 00:18:44.390 "params": { 00:18:44.390 "small_cache_size": 128, 00:18:44.390 "large_cache_size": 16, 00:18:44.390 "task_count": 2048, 00:18:44.390 "sequence_count": 2048, 00:18:44.390 "buf_count": 2048 00:18:44.390 } 00:18:44.390 } 00:18:44.390 ] 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "subsystem": "bdev", 00:18:44.390 "config": [ 00:18:44.390 { 00:18:44.390 "method": "bdev_set_options", 00:18:44.390 "params": { 00:18:44.390 "bdev_io_pool_size": 65535, 00:18:44.390 "bdev_io_cache_size": 256, 00:18:44.390 "bdev_auto_examine": true, 00:18:44.390 "iobuf_small_cache_size": 128, 00:18:44.390 "iobuf_large_cache_size": 16 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "bdev_raid_set_options", 00:18:44.390 "params": { 00:18:44.390 "process_window_size_kb": 1024, 00:18:44.390 "process_max_bandwidth_mb_sec": 0 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "bdev_iscsi_set_options", 00:18:44.390 "params": { 00:18:44.390 "timeout_sec": 30 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "bdev_nvme_set_options", 00:18:44.390 "params": { 00:18:44.390 "action_on_timeout": "none", 00:18:44.390 "timeout_us": 0, 00:18:44.390 "timeout_admin_us": 0, 00:18:44.390 "keep_alive_timeout_ms": 10000, 00:18:44.390 "arbitration_burst": 0, 00:18:44.390 "low_priority_weight": 0, 00:18:44.390 "medium_priority_weight": 0, 00:18:44.390 "high_priority_weight": 0, 00:18:44.390 "nvme_adminq_poll_period_us": 10000, 00:18:44.390 "nvme_ioq_poll_period_us": 0, 00:18:44.390 "io_queue_requests": 512, 00:18:44.390 "delay_cmd_submit": true, 00:18:44.390 "transport_retry_count": 4, 00:18:44.390 "bdev_retry_count": 3, 00:18:44.390 "transport_ack_timeout": 0, 00:18:44.390 "ctrlr_loss_timeout_sec": 0, 00:18:44.390 "reconnect_delay_sec": 0, 00:18:44.390 "fast_io_fail_timeout_sec": 0, 00:18:44.390 "disable_auto_failback": false, 00:18:44.390 "generate_uuids": false, 00:18:44.390 "transport_tos": 0, 00:18:44.390 
"nvme_error_stat": false, 00:18:44.390 "rdma_srq_size": 0, 00:18:44.390 "io_path_stat": false, 00:18:44.390 "allow_accel_sequence": false, 00:18:44.390 "rdma_max_cq_size": 0, 00:18:44.390 "rdma_cm_event_timeout_ms": 0, 00:18:44.390 "dhchap_digests": [ 00:18:44.390 "sha256", 00:18:44.390 "sha384", 00:18:44.390 "sha512" 00:18:44.390 ], 00:18:44.390 "dhchap_dhgroups": [ 00:18:44.390 "null", 00:18:44.390 "ffdhe2048", 00:18:44.390 "ffdhe3072", 00:18:44.390 "ffdhe4096", 00:18:44.390 "ffdhe6144", 00:18:44.390 "ffdhe8192" 00:18:44.390 ] 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "bdev_nvme_attach_controller", 00:18:44.390 "params": { 00:18:44.390 "name": "TLSTEST", 00:18:44.390 "trtype": "TCP", 00:18:44.390 "adrfam": "IPv4", 00:18:44.390 "traddr": "10.0.0.2", 00:18:44.390 "trsvcid": "4420", 00:18:44.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.390 "prchk_reftag": false, 00:18:44.390 "prchk_guard": false, 00:18:44.390 "ctrlr_loss_timeout_sec": 0, 00:18:44.390 "reconnect_delay_sec": 0, 00:18:44.390 "fast_io_fail_timeout_sec": 0, 00:18:44.390 "psk": "key0", 00:18:44.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.390 "hdgst": false, 00:18:44.390 "ddgst": false, 00:18:44.390 "multipath": "multipath" 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "bdev_nvme_set_hotplug", 00:18:44.390 "params": { 00:18:44.390 "period_us": 100000, 00:18:44.390 "enable": false 00:18:44.390 } 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "method": "bdev_wait_for_examine" 00:18:44.390 } 00:18:44.390 ] 00:18:44.390 }, 00:18:44.390 { 00:18:44.390 "subsystem": "nbd", 00:18:44.390 "config": [] 00:18:44.390 } 00:18:44.390 ] 00:18:44.390 }' 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2756402 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2756402 ']' 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2756402 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756402 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756402' 00:18:44.390 killing process with pid 2756402 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2756402 00:18:44.390 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.390 00:18:44.390 Latency(us) 00:18:44.390 [2024-11-20T15:11:45.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.390 [2024-11-20T15:11:45.227Z] =================================================================================================================== 00:18:44.390 [2024-11-20T15:11:45.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.390 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2756402 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2756122 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2756122 ']' 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2756122 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756122 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756122' 00:18:44.650 killing process with pid 2756122 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2756122 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2756122 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.650 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:44.650 "subsystems": [ 00:18:44.650 { 00:18:44.650 "subsystem": "keyring", 00:18:44.650 "config": [ 00:18:44.650 { 00:18:44.650 "method": "keyring_file_add_key", 00:18:44.650 "params": { 00:18:44.650 "name": "key0", 00:18:44.650 "path": "/tmp/tmp.GlzvcAtcUL" 00:18:44.650 } 00:18:44.650 } 00:18:44.650 ] 00:18:44.650 }, 00:18:44.650 { 00:18:44.650 "subsystem": "iobuf", 00:18:44.650 "config": [ 00:18:44.650 { 00:18:44.650 "method": "iobuf_set_options", 00:18:44.650 "params": { 00:18:44.650 "small_pool_count": 8192, 00:18:44.650 "large_pool_count": 1024, 00:18:44.650 "small_bufsize": 8192, 00:18:44.650 "large_bufsize": 135168, 00:18:44.650 "enable_numa": false 00:18:44.650 } 00:18:44.650 } 00:18:44.650 ] 00:18:44.650 }, 
00:18:44.650 { 00:18:44.650 "subsystem": "sock", 00:18:44.650 "config": [ 00:18:44.650 { 00:18:44.650 "method": "sock_set_default_impl", 00:18:44.650 "params": { 00:18:44.650 "impl_name": "posix" 00:18:44.650 } 00:18:44.650 }, 00:18:44.650 { 00:18:44.650 "method": "sock_impl_set_options", 00:18:44.650 "params": { 00:18:44.650 "impl_name": "ssl", 00:18:44.650 "recv_buf_size": 4096, 00:18:44.650 "send_buf_size": 4096, 00:18:44.650 "enable_recv_pipe": true, 00:18:44.650 "enable_quickack": false, 00:18:44.650 "enable_placement_id": 0, 00:18:44.650 "enable_zerocopy_send_server": true, 00:18:44.651 "enable_zerocopy_send_client": false, 00:18:44.651 "zerocopy_threshold": 0, 00:18:44.651 "tls_version": 0, 00:18:44.651 "enable_ktls": false 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "sock_impl_set_options", 00:18:44.651 "params": { 00:18:44.651 "impl_name": "posix", 00:18:44.651 "recv_buf_size": 2097152, 00:18:44.651 "send_buf_size": 2097152, 00:18:44.651 "enable_recv_pipe": true, 00:18:44.651 "enable_quickack": false, 00:18:44.651 "enable_placement_id": 0, 00:18:44.651 "enable_zerocopy_send_server": true, 00:18:44.651 "enable_zerocopy_send_client": false, 00:18:44.651 "zerocopy_threshold": 0, 00:18:44.651 "tls_version": 0, 00:18:44.651 "enable_ktls": false 00:18:44.651 } 00:18:44.651 } 00:18:44.651 ] 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "subsystem": "vmd", 00:18:44.651 "config": [] 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "subsystem": "accel", 00:18:44.651 "config": [ 00:18:44.651 { 00:18:44.651 "method": "accel_set_options", 00:18:44.651 "params": { 00:18:44.651 "small_cache_size": 128, 00:18:44.651 "large_cache_size": 16, 00:18:44.651 "task_count": 2048, 00:18:44.651 "sequence_count": 2048, 00:18:44.651 "buf_count": 2048 00:18:44.651 } 00:18:44.651 } 00:18:44.651 ] 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "subsystem": "bdev", 00:18:44.651 "config": [ 00:18:44.651 { 00:18:44.651 "method": "bdev_set_options", 00:18:44.651 "params": { 
00:18:44.651 "bdev_io_pool_size": 65535, 00:18:44.651 "bdev_io_cache_size": 256, 00:18:44.651 "bdev_auto_examine": true, 00:18:44.651 "iobuf_small_cache_size": 128, 00:18:44.651 "iobuf_large_cache_size": 16 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "bdev_raid_set_options", 00:18:44.651 "params": { 00:18:44.651 "process_window_size_kb": 1024, 00:18:44.651 "process_max_bandwidth_mb_sec": 0 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "bdev_iscsi_set_options", 00:18:44.651 "params": { 00:18:44.651 "timeout_sec": 30 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "bdev_nvme_set_options", 00:18:44.651 "params": { 00:18:44.651 "action_on_timeout": "none", 00:18:44.651 "timeout_us": 0, 00:18:44.651 "timeout_admin_us": 0, 00:18:44.651 "keep_alive_timeout_ms": 10000, 00:18:44.651 "arbitration_burst": 0, 00:18:44.651 "low_priority_weight": 0, 00:18:44.651 "medium_priority_weight": 0, 00:18:44.651 "high_priority_weight": 0, 00:18:44.651 "nvme_adminq_poll_period_us": 10000, 00:18:44.651 "nvme_ioq_poll_period_us": 0, 00:18:44.651 "io_queue_requests": 0, 00:18:44.651 "delay_cmd_submit": true, 00:18:44.651 "transport_retry_count": 4, 00:18:44.651 "bdev_retry_count": 3, 00:18:44.651 "transport_ack_timeout": 0, 00:18:44.651 "ctrlr_loss_timeout_sec": 0, 00:18:44.651 "reconnect_delay_sec": 0, 00:18:44.651 "fast_io_fail_timeout_sec": 0, 00:18:44.651 "disable_auto_failback": false, 00:18:44.651 "generate_uuids": false, 00:18:44.651 "transport_tos": 0, 00:18:44.651 "nvme_error_stat": false, 00:18:44.651 "rdma_srq_size": 0, 00:18:44.651 "io_path_stat": false, 00:18:44.651 "allow_accel_sequence": false, 00:18:44.651 "rdma_max_cq_size": 0, 00:18:44.651 "rdma_cm_event_timeout_ms": 0, 00:18:44.651 "dhchap_digests": [ 00:18:44.651 "sha256", 00:18:44.651 "sha384", 00:18:44.651 "sha512" 00:18:44.651 ], 00:18:44.651 "dhchap_dhgroups": [ 00:18:44.651 "null", 00:18:44.651 "ffdhe2048", 00:18:44.651 "ffdhe3072", 00:18:44.651 
"ffdhe4096", 00:18:44.651 "ffdhe6144", 00:18:44.651 "ffdhe8192" 00:18:44.651 ] 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "bdev_nvme_set_hotplug", 00:18:44.651 "params": { 00:18:44.651 "period_us": 100000, 00:18:44.651 "enable": false 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "bdev_malloc_create", 00:18:44.651 "params": { 00:18:44.651 "name": "malloc0", 00:18:44.651 "num_blocks": 8192, 00:18:44.651 "block_size": 4096, 00:18:44.651 "physical_block_size": 4096, 00:18:44.651 "uuid": "812c9376-6373-4131-9c44-3c8a062acdc4", 00:18:44.651 "optimal_io_boundary": 0, 00:18:44.651 "md_size": 0, 00:18:44.651 "dif_type": 0, 00:18:44.651 "dif_is_head_of_md": false, 00:18:44.651 "dif_pi_format": 0 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "bdev_wait_for_examine" 00:18:44.651 } 00:18:44.651 ] 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "subsystem": "nbd", 00:18:44.651 "config": [] 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "subsystem": "scheduler", 00:18:44.651 "config": [ 00:18:44.651 { 00:18:44.651 "method": "framework_set_scheduler", 00:18:44.651 "params": { 00:18:44.651 "name": "static" 00:18:44.651 } 00:18:44.651 } 00:18:44.651 ] 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "subsystem": "nvmf", 00:18:44.651 "config": [ 00:18:44.651 { 00:18:44.651 "method": "nvmf_set_config", 00:18:44.651 "params": { 00:18:44.651 "discovery_filter": "match_any", 00:18:44.651 "admin_cmd_passthru": { 00:18:44.651 "identify_ctrlr": false 00:18:44.651 }, 00:18:44.651 "dhchap_digests": [ 00:18:44.651 "sha256", 00:18:44.651 "sha384", 00:18:44.651 "sha512" 00:18:44.651 ], 00:18:44.651 "dhchap_dhgroups": [ 00:18:44.651 "null", 00:18:44.651 "ffdhe2048", 00:18:44.651 "ffdhe3072", 00:18:44.651 "ffdhe4096", 00:18:44.651 "ffdhe6144", 00:18:44.651 "ffdhe8192" 00:18:44.651 ] 00:18:44.651 } 00:18:44.651 }, 00:18:44.651 { 00:18:44.651 "method": "nvmf_set_max_subsystems", 00:18:44.651 "params": { 00:18:44.651 "max_subsystems": 1024 
00:18:44.651 } 00:18:44.651 }, 00:18:44.652 { 00:18:44.652 "method": "nvmf_set_crdt", 00:18:44.652 "params": { 00:18:44.652 "crdt1": 0, 00:18:44.652 "crdt2": 0, 00:18:44.652 "crdt3": 0 00:18:44.652 } 00:18:44.652 }, 00:18:44.652 { 00:18:44.652 "method": "nvmf_create_transport", 00:18:44.652 "params": { 00:18:44.652 "trtype": "TCP", 00:18:44.652 "max_queue_depth": 128, 00:18:44.652 "max_io_qpairs_per_ctrlr": 127, 00:18:44.652 "in_capsule_data_size": 4096, 00:18:44.652 "max_io_size": 131072, 00:18:44.652 "io_unit_size": 131072, 00:18:44.652 "max_aq_depth": 128, 00:18:44.652 "num_shared_buffers": 511, 00:18:44.652 "buf_cache_size": 4294967295, 00:18:44.652 "dif_insert_or_strip": false, 00:18:44.652 "zcopy": false, 00:18:44.652 "c2h_success": false, 00:18:44.652 "sock_priority": 0, 00:18:44.652 "abort_timeout_sec": 1, 00:18:44.652 "ack_timeout": 0, 00:18:44.652 "data_wr_pool_size": 0 00:18:44.652 } 00:18:44.652 }, 00:18:44.652 { 00:18:44.652 "method": "nvmf_create_subsystem", 00:18:44.652 "params": { 00:18:44.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.652 "allow_any_host": false, 00:18:44.652 "serial_number": "SPDK00000000000001", 00:18:44.652 "model_number": "SPDK bdev Controller", 00:18:44.652 "max_namespaces": 10, 00:18:44.652 "min_cntlid": 1, 00:18:44.652 "max_cntlid": 65519, 00:18:44.652 "ana_reporting": false 00:18:44.652 } 00:18:44.652 }, 00:18:44.652 { 00:18:44.652 "method": "nvmf_subsystem_add_host", 00:18:44.652 "params": { 00:18:44.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.652 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.652 "psk": "key0" 00:18:44.652 } 00:18:44.652 }, 00:18:44.652 { 00:18:44.652 "method": "nvmf_subsystem_add_ns", 00:18:44.652 "params": { 00:18:44.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.652 "namespace": { 00:18:44.652 "nsid": 1, 00:18:44.652 "bdev_name": "malloc0", 00:18:44.652 "nguid": "812C9376637341319C443C8A062ACDC4", 00:18:44.652 "uuid": "812c9376-6373-4131-9c44-3c8a062acdc4", 00:18:44.652 "no_auto_visible": 
false 00:18:44.652 } 00:18:44.652 } 00:18:44.652 }, 00:18:44.652 { 00:18:44.652 "method": "nvmf_subsystem_add_listener", 00:18:44.652 "params": { 00:18:44.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.652 "listen_address": { 00:18:44.652 "trtype": "TCP", 00:18:44.652 "adrfam": "IPv4", 00:18:44.652 "traddr": "10.0.0.2", 00:18:44.652 "trsvcid": "4420" 00:18:44.652 }, 00:18:44.652 "secure_channel": true 00:18:44.652 } 00:18:44.652 } 00:18:44.652 ] 00:18:44.652 } 00:18:44.652 ] 00:18:44.652 }' 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2756828 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2756828 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2756828 ']' 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.652 16:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.910 [2024-11-20 16:11:45.512289] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:44.910 [2024-11-20 16:11:45.512334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.910 [2024-11-20 16:11:45.590876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.910 [2024-11-20 16:11:45.631670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.910 [2024-11-20 16:11:45.631706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.910 [2024-11-20 16:11:45.631713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.910 [2024-11-20 16:11:45.631719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.910 [2024-11-20 16:11:45.631724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:44.910 [2024-11-20 16:11:45.632318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.169 [2024-11-20 16:11:45.846372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.169 [2024-11-20 16:11:45.878401] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.169 [2024-11-20 16:11:45.878593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2756873 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2756873 /var/tmp/bdevperf.sock 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2756873 ']' 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.736 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:45.736 "subsystems": [ 00:18:45.736 { 00:18:45.736 "subsystem": "keyring", 00:18:45.736 "config": [ 00:18:45.736 { 00:18:45.736 "method": "keyring_file_add_key", 00:18:45.736 "params": { 00:18:45.736 "name": "key0", 00:18:45.736 "path": "/tmp/tmp.GlzvcAtcUL" 00:18:45.736 } 00:18:45.736 } 00:18:45.736 ] 00:18:45.736 }, 00:18:45.736 { 00:18:45.736 "subsystem": "iobuf", 00:18:45.736 "config": [ 00:18:45.736 { 00:18:45.736 "method": "iobuf_set_options", 00:18:45.736 "params": { 00:18:45.736 "small_pool_count": 8192, 00:18:45.736 "large_pool_count": 1024, 00:18:45.736 "small_bufsize": 8192, 00:18:45.736 "large_bufsize": 135168, 00:18:45.736 "enable_numa": false 00:18:45.736 } 00:18:45.736 } 00:18:45.736 ] 00:18:45.736 }, 00:18:45.736 { 00:18:45.736 "subsystem": "sock", 00:18:45.736 "config": [ 00:18:45.736 { 00:18:45.736 "method": "sock_set_default_impl", 00:18:45.736 "params": { 00:18:45.736 "impl_name": "posix" 00:18:45.736 } 00:18:45.736 }, 00:18:45.736 { 00:18:45.737 "method": "sock_impl_set_options", 00:18:45.737 "params": { 00:18:45.737 "impl_name": "ssl", 00:18:45.737 "recv_buf_size": 4096, 00:18:45.737 "send_buf_size": 4096, 00:18:45.737 "enable_recv_pipe": true, 00:18:45.737 "enable_quickack": false, 00:18:45.737 "enable_placement_id": 0, 00:18:45.737 "enable_zerocopy_send_server": true, 00:18:45.737 "enable_zerocopy_send_client": false, 00:18:45.737 "zerocopy_threshold": 0, 00:18:45.737 "tls_version": 0, 00:18:45.737 "enable_ktls": false 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "method": "sock_impl_set_options", 00:18:45.737 "params": { 
00:18:45.737 "impl_name": "posix", 00:18:45.737 "recv_buf_size": 2097152, 00:18:45.737 "send_buf_size": 2097152, 00:18:45.737 "enable_recv_pipe": true, 00:18:45.737 "enable_quickack": false, 00:18:45.737 "enable_placement_id": 0, 00:18:45.737 "enable_zerocopy_send_server": true, 00:18:45.737 "enable_zerocopy_send_client": false, 00:18:45.737 "zerocopy_threshold": 0, 00:18:45.737 "tls_version": 0, 00:18:45.737 "enable_ktls": false 00:18:45.737 } 00:18:45.737 } 00:18:45.737 ] 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "subsystem": "vmd", 00:18:45.737 "config": [] 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "subsystem": "accel", 00:18:45.737 "config": [ 00:18:45.737 { 00:18:45.737 "method": "accel_set_options", 00:18:45.737 "params": { 00:18:45.737 "small_cache_size": 128, 00:18:45.737 "large_cache_size": 16, 00:18:45.737 "task_count": 2048, 00:18:45.737 "sequence_count": 2048, 00:18:45.737 "buf_count": 2048 00:18:45.737 } 00:18:45.737 } 00:18:45.737 ] 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "subsystem": "bdev", 00:18:45.737 "config": [ 00:18:45.737 { 00:18:45.737 "method": "bdev_set_options", 00:18:45.737 "params": { 00:18:45.737 "bdev_io_pool_size": 65535, 00:18:45.737 "bdev_io_cache_size": 256, 00:18:45.737 "bdev_auto_examine": true, 00:18:45.737 "iobuf_small_cache_size": 128, 00:18:45.737 "iobuf_large_cache_size": 16 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "method": "bdev_raid_set_options", 00:18:45.737 "params": { 00:18:45.737 "process_window_size_kb": 1024, 00:18:45.737 "process_max_bandwidth_mb_sec": 0 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "method": "bdev_iscsi_set_options", 00:18:45.737 "params": { 00:18:45.737 "timeout_sec": 30 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "method": "bdev_nvme_set_options", 00:18:45.737 "params": { 00:18:45.737 "action_on_timeout": "none", 00:18:45.737 "timeout_us": 0, 00:18:45.737 "timeout_admin_us": 0, 00:18:45.737 "keep_alive_timeout_ms": 10000, 00:18:45.737 
"arbitration_burst": 0, 00:18:45.737 "low_priority_weight": 0, 00:18:45.737 "medium_priority_weight": 0, 00:18:45.737 "high_priority_weight": 0, 00:18:45.737 "nvme_adminq_poll_period_us": 10000, 00:18:45.737 "nvme_ioq_poll_period_us": 0, 00:18:45.737 "io_queue_requests": 512, 00:18:45.737 "delay_cmd_submit": true, 00:18:45.737 "transport_retry_count": 4, 00:18:45.737 "bdev_retry_count": 3, 00:18:45.737 "transport_ack_timeout": 0, 00:18:45.737 "ctrlr_loss_timeout_sec": 0, 00:18:45.737 "reconnect_delay_sec": 0, 00:18:45.737 "fast_io_fail_timeout_sec": 0, 00:18:45.737 "disable_auto_failback": false, 00:18:45.737 "generate_uuids": false, 00:18:45.737 "transport_tos": 0, 00:18:45.737 "nvme_error_stat": false, 00:18:45.737 "rdma_srq_size": 0, 00:18:45.737 "io_path_stat": false, 00:18:45.737 "allow_accel_sequence": false, 00:18:45.737 "rdma_max_cq_size": 0, 00:18:45.737 "rdma_cm_event_timeout_ms": 0, 00:18:45.737 "dhchap_digests": [ 00:18:45.737 "sha256", 00:18:45.737 "sha384", 00:18:45.737 "sha512" 00:18:45.737 ], 00:18:45.737 "dhchap_dhgroups": [ 00:18:45.737 "null", 00:18:45.737 "ffdhe2048", 00:18:45.737 "ffdhe3072", 00:18:45.737 "ffdhe4096", 00:18:45.737 "ffdhe6144", 00:18:45.737 "ffdhe8192" 00:18:45.737 ] 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "method": "bdev_nvme_attach_controller", 00:18:45.737 "params": { 00:18:45.737 "name": "TLSTEST", 00:18:45.737 "trtype": "TCP", 00:18:45.737 "adrfam": "IPv4", 00:18:45.737 "traddr": "10.0.0.2", 00:18:45.737 "trsvcid": "4420", 00:18:45.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.737 "prchk_reftag": false, 00:18:45.737 "prchk_guard": false, 00:18:45.737 "ctrlr_loss_timeout_sec": 0, 00:18:45.737 "reconnect_delay_sec": 0, 00:18:45.737 "fast_io_fail_timeout_sec": 0, 00:18:45.737 "psk": "key0", 00:18:45.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.737 "hdgst": false, 00:18:45.737 "ddgst": false, 00:18:45.737 "multipath": "multipath" 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 
"method": "bdev_nvme_set_hotplug", 00:18:45.737 "params": { 00:18:45.737 "period_us": 100000, 00:18:45.737 "enable": false 00:18:45.737 } 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "method": "bdev_wait_for_examine" 00:18:45.737 } 00:18:45.737 ] 00:18:45.737 }, 00:18:45.737 { 00:18:45.737 "subsystem": "nbd", 00:18:45.737 "config": [] 00:18:45.737 } 00:18:45.737 ] 00:18:45.737 }' 00:18:45.738 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.738 16:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.738 [2024-11-20 16:11:46.442324] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:45.738 [2024-11-20 16:11:46.442370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756873 ] 00:18:45.738 [2024-11-20 16:11:46.518257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.738 [2024-11-20 16:11:46.559999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.996 [2024-11-20 16:11:46.713266] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.562 16:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.562 16:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.562 16:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.562 Running I/O for 10 seconds... 
00:18:48.895 5199.00 IOPS, 20.31 MiB/s [2024-11-20T15:11:50.668Z] 5372.50 IOPS, 20.99 MiB/s [2024-11-20T15:11:51.605Z] 5379.33 IOPS, 21.01 MiB/s [2024-11-20T15:11:52.540Z] 5378.50 IOPS, 21.01 MiB/s [2024-11-20T15:11:53.477Z] 5407.40 IOPS, 21.12 MiB/s [2024-11-20T15:11:54.416Z] 5387.50 IOPS, 21.04 MiB/s [2024-11-20T15:11:55.798Z] 5386.43 IOPS, 21.04 MiB/s [2024-11-20T15:11:56.735Z] 5387.62 IOPS, 21.05 MiB/s [2024-11-20T15:11:57.674Z] 5396.22 IOPS, 21.08 MiB/s [2024-11-20T15:11:57.674Z] 5311.20 IOPS, 20.75 MiB/s 00:18:56.837 Latency(us) 00:18:56.837 [2024-11-20T15:11:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.837 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.837 Verification LBA range: start 0x0 length 0x2000 00:18:56.837 TLSTESTn1 : 10.02 5314.34 20.76 0.00 0.00 24048.89 5185.89 28607.89 00:18:56.837 [2024-11-20T15:11:57.674Z] =================================================================================================================== 00:18:56.837 [2024-11-20T15:11:57.674Z] Total : 5314.34 20.76 0.00 0.00 24048.89 5185.89 28607.89 00:18:56.837 { 00:18:56.837 "results": [ 00:18:56.837 { 00:18:56.837 "job": "TLSTESTn1", 00:18:56.837 "core_mask": "0x4", 00:18:56.837 "workload": "verify", 00:18:56.837 "status": "finished", 00:18:56.837 "verify_range": { 00:18:56.837 "start": 0, 00:18:56.837 "length": 8192 00:18:56.837 }, 00:18:56.837 "queue_depth": 128, 00:18:56.837 "io_size": 4096, 00:18:56.837 "runtime": 10.01781, 00:18:56.837 "iops": 5314.335169063897, 00:18:56.837 "mibps": 20.75912175415585, 00:18:56.837 "io_failed": 0, 00:18:56.837 "io_timeout": 0, 00:18:56.837 "avg_latency_us": 24048.892313270844, 00:18:56.837 "min_latency_us": 5185.892173913044, 00:18:56.837 "max_latency_us": 28607.888695652175 00:18:56.837 } 00:18:56.837 ], 00:18:56.837 "core_count": 1 00:18:56.837 } 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2756873 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2756873 ']' 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2756873 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756873 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756873' 00:18:56.837 killing process with pid 2756873 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2756873 00:18:56.837 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.837 00:18:56.837 Latency(us) 00:18:56.837 [2024-11-20T15:11:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.837 [2024-11-20T15:11:57.674Z] =================================================================================================================== 00:18:56.837 [2024-11-20T15:11:57.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2756873 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2756828 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2756828 ']' 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2756828 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.837 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756828 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756828' 00:18:57.097 killing process with pid 2756828 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2756828 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2756828 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2758730 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2758730 00:18:57.097 
16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2758730 ']' 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.097 16:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.097 [2024-11-20 16:11:57.916126] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:57.097 [2024-11-20 16:11:57.916172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.357 [2024-11-20 16:11:57.993256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.357 [2024-11-20 16:11:58.034285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.357 [2024-11-20 16:11:58.034325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.357 [2024-11-20 16:11:58.034332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.357 [2024-11-20 16:11:58.034338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:57.357 [2024-11-20 16:11:58.034343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.357 [2024-11-20 16:11:58.034925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.GlzvcAtcUL 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GlzvcAtcUL 00:18:57.357 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.616 [2024-11-20 16:11:58.348181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.616 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.875 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:58.133 [2024-11-20 16:11:58.729176] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:58.133 [2024-11-20 16:11:58.729394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.133 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:58.133 malloc0 00:18:58.133 16:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.393 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2759022 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2759022 /var/tmp/bdevperf.sock 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2759022 ']' 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.652 
16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.652 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.911 [2024-11-20 16:11:59.515330] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:58.911 [2024-11-20 16:11:59.515382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759022 ] 00:18:58.911 [2024-11-20 16:11:59.589597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.911 [2024-11-20 16:11:59.632654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.911 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.911 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.911 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:18:59.171 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:59.430 [2024-11-20 16:12:00.081514] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:59.430 nvme0n1 00:18:59.431 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.431 Running I/O for 1 seconds... 00:19:00.811 5351.00 IOPS, 20.90 MiB/s 00:19:00.811 Latency(us) 00:19:00.811 [2024-11-20T15:12:01.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.811 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.811 Verification LBA range: start 0x0 length 0x2000 00:19:00.812 nvme0n1 : 1.01 5406.54 21.12 0.00 0.00 23510.41 5527.82 25872.47 00:19:00.812 [2024-11-20T15:12:01.649Z] =================================================================================================================== 00:19:00.812 [2024-11-20T15:12:01.649Z] Total : 5406.54 21.12 0.00 0.00 23510.41 5527.82 25872.47 00:19:00.812 { 00:19:00.812 "results": [ 00:19:00.812 { 00:19:00.812 "job": "nvme0n1", 00:19:00.812 "core_mask": "0x2", 00:19:00.812 "workload": "verify", 00:19:00.812 "status": "finished", 00:19:00.812 "verify_range": { 00:19:00.812 "start": 0, 00:19:00.812 "length": 8192 00:19:00.812 }, 00:19:00.812 "queue_depth": 128, 00:19:00.812 "io_size": 4096, 00:19:00.812 "runtime": 1.013402, 00:19:00.812 "iops": 5406.541530409452, 00:19:00.812 "mibps": 21.119302853161923, 00:19:00.812 "io_failed": 0, 00:19:00.812 "io_timeout": 0, 00:19:00.812 "avg_latency_us": 23510.41138132157, 00:19:00.812 "min_latency_us": 5527.819130434783, 00:19:00.812 "max_latency_us": 25872.47304347826 00:19:00.812 } 00:19:00.812 ], 00:19:00.812 "core_count": 1 00:19:00.812 } 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2759022 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2759022 ']' 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2759022 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759022 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759022' 00:19:00.812 killing process with pid 2759022 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2759022 00:19:00.812 Received shutdown signal, test time was about 1.000000 seconds 00:19:00.812 00:19:00.812 Latency(us) 00:19:00.812 [2024-11-20T15:12:01.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.812 [2024-11-20T15:12:01.649Z] =================================================================================================================== 00:19:00.812 [2024-11-20T15:12:01.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2759022 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2758730 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2758730 ']' 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2758730 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2758730 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2758730' 00:19:00.812 killing process with pid 2758730 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2758730 00:19:00.812 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2758730 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2759565 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2759565 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2759565 ']' 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.071 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.071 [2024-11-20 16:12:01.789550] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:19:01.071 [2024-11-20 16:12:01.789597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.071 [2024-11-20 16:12:01.868270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.330 [2024-11-20 16:12:01.909104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.330 [2024-11-20 16:12:01.909141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.330 [2024-11-20 16:12:01.909149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.330 [2024-11-20 16:12:01.909155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.330 [2024-11-20 16:12:01.909160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
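The waitforlisten step traced above (autotest_common.sh@839-844, with its max_retries=100 loop) can be sketched as a small poll loop. This is an approximation, not SPDK's actual helper: the real function also verifies the pid is still alive and probes the socket with rpc.py, while this sketch only checks that the UNIX-domain socket file exists.

```shell
# Rough sketch of waitforlisten from the trace above: poll until the app's
# UNIX-domain RPC socket (e.g. /var/tmp/spdk.sock) appears, giving up after
# max_retries attempts. The bare -S existence test is a simplification; the
# real helper also rechecks the pid and that RPC actually answers.
waitforlisten() {
    local sock=$1
    local max_retries=100
    local i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$sock" ]; then
            return 0   # socket exists: the app is (probably) listening
        fi
        sleep 0.1
    done
    return 1
}
```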
00:19:01.330 [2024-11-20 16:12:01.909733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.330 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.331 [2024-11-20 16:12:02.045652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.331 malloc0 00:19:01.331 [2024-11-20 16:12:02.073827] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.331 [2024-11-20 16:12:02.074033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2759603 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2759603 /var/tmp/bdevperf.sock 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2759603 ']' 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.331 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.331 [2024-11-20 16:12:02.150180] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:19:01.331 [2024-11-20 16:12:02.150221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759603 ] 00:19:01.590 [2024-11-20 16:12:02.225924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.590 [2024-11-20 16:12:02.269328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.590 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.590 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.590 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL 00:19:01.849 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.108 [2024-11-20 16:12:02.738814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.108 nvme0n1 00:19:02.108 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.108 Running I/O for 1 seconds... 
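The bdevperf TLS setup traced above (target/tls.sh@259-260) boils down to two rpc.py calls: register the PSK file under a keyring name, then attach the NVMe/TCP controller with `--psk` pointing at that name. A condensed sketch follows; the NQNs, address, socket, and PSK file name are taken verbatim from this run's log, but the relative `scripts/rpc.py` path is an assumption standing in for the full Jenkins workspace path.

```shell
# Condensed form of the two RPC calls traced above: load the TLS PSK from
# its interchange file into the keyring as "key0", then attach an NVMe/TCP
# controller that authenticates with that PSK.
RPC=scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

tls_attach() {
    "$RPC" -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.GlzvcAtcUL
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
}
```

Once the controller is up (the `nvme0n1` namespace above), bdevperf.py's `perform_tests` drives the verify workload over the TLS-wrapped connection.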
00:19:03.485 5360.00 IOPS, 20.94 MiB/s 00:19:03.485 Latency(us) 00:19:03.485 [2024-11-20T15:12:04.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.485 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.485 Verification LBA range: start 0x0 length 0x2000 00:19:03.485 nvme0n1 : 1.02 5388.87 21.05 0.00 0.00 23555.29 6411.13 20059.71 00:19:03.485 [2024-11-20T15:12:04.322Z] =================================================================================================================== 00:19:03.485 [2024-11-20T15:12:04.322Z] Total : 5388.87 21.05 0.00 0.00 23555.29 6411.13 20059.71 00:19:03.485 { 00:19:03.485 "results": [ 00:19:03.485 { 00:19:03.485 "job": "nvme0n1", 00:19:03.485 "core_mask": "0x2", 00:19:03.485 "workload": "verify", 00:19:03.485 "status": "finished", 00:19:03.485 "verify_range": { 00:19:03.485 "start": 0, 00:19:03.485 "length": 8192 00:19:03.485 }, 00:19:03.485 "queue_depth": 128, 00:19:03.485 "io_size": 4096, 00:19:03.485 "runtime": 1.018581, 00:19:03.485 "iops": 5388.869417356106, 00:19:03.485 "mibps": 21.05027116154729, 00:19:03.485 "io_failed": 0, 00:19:03.485 "io_timeout": 0, 00:19:03.485 "avg_latency_us": 23555.29342447741, 00:19:03.485 "min_latency_us": 6411.130434782609, 00:19:03.485 "max_latency_us": 20059.714782608695 00:19:03.485 } 00:19:03.485 ], 00:19:03.485 "core_count": 1 00:19:03.485 } 00:19:03.485 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:03.485 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.485 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.485 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.485 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:03.485 "subsystems": [ 00:19:03.485 { 00:19:03.485 "subsystem": 
"keyring", 00:19:03.485 "config": [ 00:19:03.485 { 00:19:03.485 "method": "keyring_file_add_key", 00:19:03.485 "params": { 00:19:03.485 "name": "key0", 00:19:03.485 "path": "/tmp/tmp.GlzvcAtcUL" 00:19:03.485 } 00:19:03.485 } 00:19:03.485 ] 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "subsystem": "iobuf", 00:19:03.485 "config": [ 00:19:03.485 { 00:19:03.485 "method": "iobuf_set_options", 00:19:03.485 "params": { 00:19:03.485 "small_pool_count": 8192, 00:19:03.485 "large_pool_count": 1024, 00:19:03.485 "small_bufsize": 8192, 00:19:03.485 "large_bufsize": 135168, 00:19:03.485 "enable_numa": false 00:19:03.485 } 00:19:03.485 } 00:19:03.485 ] 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "subsystem": "sock", 00:19:03.485 "config": [ 00:19:03.485 { 00:19:03.485 "method": "sock_set_default_impl", 00:19:03.485 "params": { 00:19:03.485 "impl_name": "posix" 00:19:03.485 } 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "method": "sock_impl_set_options", 00:19:03.485 "params": { 00:19:03.485 "impl_name": "ssl", 00:19:03.485 "recv_buf_size": 4096, 00:19:03.485 "send_buf_size": 4096, 00:19:03.485 "enable_recv_pipe": true, 00:19:03.485 "enable_quickack": false, 00:19:03.485 "enable_placement_id": 0, 00:19:03.485 "enable_zerocopy_send_server": true, 00:19:03.485 "enable_zerocopy_send_client": false, 00:19:03.485 "zerocopy_threshold": 0, 00:19:03.485 "tls_version": 0, 00:19:03.485 "enable_ktls": false 00:19:03.485 } 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "method": "sock_impl_set_options", 00:19:03.485 "params": { 00:19:03.485 "impl_name": "posix", 00:19:03.485 "recv_buf_size": 2097152, 00:19:03.485 "send_buf_size": 2097152, 00:19:03.485 "enable_recv_pipe": true, 00:19:03.485 "enable_quickack": false, 00:19:03.485 "enable_placement_id": 0, 00:19:03.485 "enable_zerocopy_send_server": true, 00:19:03.485 "enable_zerocopy_send_client": false, 00:19:03.485 "zerocopy_threshold": 0, 00:19:03.485 "tls_version": 0, 00:19:03.485 "enable_ktls": false 00:19:03.485 } 00:19:03.485 } 00:19:03.485 
] 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "subsystem": "vmd", 00:19:03.485 "config": [] 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "subsystem": "accel", 00:19:03.485 "config": [ 00:19:03.485 { 00:19:03.485 "method": "accel_set_options", 00:19:03.485 "params": { 00:19:03.485 "small_cache_size": 128, 00:19:03.485 "large_cache_size": 16, 00:19:03.485 "task_count": 2048, 00:19:03.485 "sequence_count": 2048, 00:19:03.485 "buf_count": 2048 00:19:03.485 } 00:19:03.485 } 00:19:03.485 ] 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "subsystem": "bdev", 00:19:03.485 "config": [ 00:19:03.485 { 00:19:03.485 "method": "bdev_set_options", 00:19:03.485 "params": { 00:19:03.485 "bdev_io_pool_size": 65535, 00:19:03.485 "bdev_io_cache_size": 256, 00:19:03.485 "bdev_auto_examine": true, 00:19:03.485 "iobuf_small_cache_size": 128, 00:19:03.485 "iobuf_large_cache_size": 16 00:19:03.485 } 00:19:03.485 }, 00:19:03.485 { 00:19:03.485 "method": "bdev_raid_set_options", 00:19:03.485 "params": { 00:19:03.486 "process_window_size_kb": 1024, 00:19:03.486 "process_max_bandwidth_mb_sec": 0 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "bdev_iscsi_set_options", 00:19:03.486 "params": { 00:19:03.486 "timeout_sec": 30 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "bdev_nvme_set_options", 00:19:03.486 "params": { 00:19:03.486 "action_on_timeout": "none", 00:19:03.486 "timeout_us": 0, 00:19:03.486 "timeout_admin_us": 0, 00:19:03.486 "keep_alive_timeout_ms": 10000, 00:19:03.486 "arbitration_burst": 0, 00:19:03.486 "low_priority_weight": 0, 00:19:03.486 "medium_priority_weight": 0, 00:19:03.486 "high_priority_weight": 0, 00:19:03.486 "nvme_adminq_poll_period_us": 10000, 00:19:03.486 "nvme_ioq_poll_period_us": 0, 00:19:03.486 "io_queue_requests": 0, 00:19:03.486 "delay_cmd_submit": true, 00:19:03.486 "transport_retry_count": 4, 00:19:03.486 "bdev_retry_count": 3, 00:19:03.486 "transport_ack_timeout": 0, 00:19:03.486 "ctrlr_loss_timeout_sec": 0, 
00:19:03.486 "reconnect_delay_sec": 0, 00:19:03.486 "fast_io_fail_timeout_sec": 0, 00:19:03.486 "disable_auto_failback": false, 00:19:03.486 "generate_uuids": false, 00:19:03.486 "transport_tos": 0, 00:19:03.486 "nvme_error_stat": false, 00:19:03.486 "rdma_srq_size": 0, 00:19:03.486 "io_path_stat": false, 00:19:03.486 "allow_accel_sequence": false, 00:19:03.486 "rdma_max_cq_size": 0, 00:19:03.486 "rdma_cm_event_timeout_ms": 0, 00:19:03.486 "dhchap_digests": [ 00:19:03.486 "sha256", 00:19:03.486 "sha384", 00:19:03.486 "sha512" 00:19:03.486 ], 00:19:03.486 "dhchap_dhgroups": [ 00:19:03.486 "null", 00:19:03.486 "ffdhe2048", 00:19:03.486 "ffdhe3072", 00:19:03.486 "ffdhe4096", 00:19:03.486 "ffdhe6144", 00:19:03.486 "ffdhe8192" 00:19:03.486 ] 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "bdev_nvme_set_hotplug", 00:19:03.486 "params": { 00:19:03.486 "period_us": 100000, 00:19:03.486 "enable": false 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "bdev_malloc_create", 00:19:03.486 "params": { 00:19:03.486 "name": "malloc0", 00:19:03.486 "num_blocks": 8192, 00:19:03.486 "block_size": 4096, 00:19:03.486 "physical_block_size": 4096, 00:19:03.486 "uuid": "f887d5bf-1fe2-4302-9174-29f397b979c3", 00:19:03.486 "optimal_io_boundary": 0, 00:19:03.486 "md_size": 0, 00:19:03.486 "dif_type": 0, 00:19:03.486 "dif_is_head_of_md": false, 00:19:03.486 "dif_pi_format": 0 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "bdev_wait_for_examine" 00:19:03.486 } 00:19:03.486 ] 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "subsystem": "nbd", 00:19:03.486 "config": [] 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "subsystem": "scheduler", 00:19:03.486 "config": [ 00:19:03.486 { 00:19:03.486 "method": "framework_set_scheduler", 00:19:03.486 "params": { 00:19:03.486 "name": "static" 00:19:03.486 } 00:19:03.486 } 00:19:03.486 ] 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "subsystem": "nvmf", 00:19:03.486 "config": [ 00:19:03.486 { 
00:19:03.486 "method": "nvmf_set_config", 00:19:03.486 "params": { 00:19:03.486 "discovery_filter": "match_any", 00:19:03.486 "admin_cmd_passthru": { 00:19:03.486 "identify_ctrlr": false 00:19:03.486 }, 00:19:03.486 "dhchap_digests": [ 00:19:03.486 "sha256", 00:19:03.486 "sha384", 00:19:03.486 "sha512" 00:19:03.486 ], 00:19:03.486 "dhchap_dhgroups": [ 00:19:03.486 "null", 00:19:03.486 "ffdhe2048", 00:19:03.486 "ffdhe3072", 00:19:03.486 "ffdhe4096", 00:19:03.486 "ffdhe6144", 00:19:03.486 "ffdhe8192" 00:19:03.486 ] 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_set_max_subsystems", 00:19:03.486 "params": { 00:19:03.486 "max_subsystems": 1024 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_set_crdt", 00:19:03.486 "params": { 00:19:03.486 "crdt1": 0, 00:19:03.486 "crdt2": 0, 00:19:03.486 "crdt3": 0 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_create_transport", 00:19:03.486 "params": { 00:19:03.486 "trtype": "TCP", 00:19:03.486 "max_queue_depth": 128, 00:19:03.486 "max_io_qpairs_per_ctrlr": 127, 00:19:03.486 "in_capsule_data_size": 4096, 00:19:03.486 "max_io_size": 131072, 00:19:03.486 "io_unit_size": 131072, 00:19:03.486 "max_aq_depth": 128, 00:19:03.486 "num_shared_buffers": 511, 00:19:03.486 "buf_cache_size": 4294967295, 00:19:03.486 "dif_insert_or_strip": false, 00:19:03.486 "zcopy": false, 00:19:03.486 "c2h_success": false, 00:19:03.486 "sock_priority": 0, 00:19:03.486 "abort_timeout_sec": 1, 00:19:03.486 "ack_timeout": 0, 00:19:03.486 "data_wr_pool_size": 0 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_create_subsystem", 00:19:03.486 "params": { 00:19:03.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.486 "allow_any_host": false, 00:19:03.486 "serial_number": "00000000000000000000", 00:19:03.486 "model_number": "SPDK bdev Controller", 00:19:03.486 "max_namespaces": 32, 00:19:03.486 "min_cntlid": 1, 00:19:03.486 "max_cntlid": 65519, 00:19:03.486 
"ana_reporting": false 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_subsystem_add_host", 00:19:03.486 "params": { 00:19:03.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.486 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.486 "psk": "key0" 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_subsystem_add_ns", 00:19:03.486 "params": { 00:19:03.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.486 "namespace": { 00:19:03.486 "nsid": 1, 00:19:03.486 "bdev_name": "malloc0", 00:19:03.486 "nguid": "F887D5BF1FE24302917429F397B979C3", 00:19:03.486 "uuid": "f887d5bf-1fe2-4302-9174-29f397b979c3", 00:19:03.486 "no_auto_visible": false 00:19:03.486 } 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "nvmf_subsystem_add_listener", 00:19:03.486 "params": { 00:19:03.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.486 "listen_address": { 00:19:03.486 "trtype": "TCP", 00:19:03.486 "adrfam": "IPv4", 00:19:03.486 "traddr": "10.0.0.2", 00:19:03.486 "trsvcid": "4420" 00:19:03.486 }, 00:19:03.486 "secure_channel": false, 00:19:03.486 "sock_impl": "ssl" 00:19:03.486 } 00:19:03.486 } 00:19:03.486 ] 00:19:03.486 } 00:19:03.486 ] 00:19:03.486 }' 00:19:03.486 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:03.486 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:03.486 "subsystems": [ 00:19:03.486 { 00:19:03.486 "subsystem": "keyring", 00:19:03.486 "config": [ 00:19:03.486 { 00:19:03.486 "method": "keyring_file_add_key", 00:19:03.486 "params": { 00:19:03.486 "name": "key0", 00:19:03.486 "path": "/tmp/tmp.GlzvcAtcUL" 00:19:03.486 } 00:19:03.486 } 00:19:03.486 ] 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "subsystem": "iobuf", 00:19:03.486 "config": [ 00:19:03.486 { 00:19:03.486 "method": "iobuf_set_options", 00:19:03.486 "params": { 00:19:03.486 
"small_pool_count": 8192, 00:19:03.486 "large_pool_count": 1024, 00:19:03.486 "small_bufsize": 8192, 00:19:03.486 "large_bufsize": 135168, 00:19:03.486 "enable_numa": false 00:19:03.486 } 00:19:03.486 } 00:19:03.486 ] 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "subsystem": "sock", 00:19:03.486 "config": [ 00:19:03.486 { 00:19:03.486 "method": "sock_set_default_impl", 00:19:03.486 "params": { 00:19:03.486 "impl_name": "posix" 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "sock_impl_set_options", 00:19:03.486 "params": { 00:19:03.486 "impl_name": "ssl", 00:19:03.486 "recv_buf_size": 4096, 00:19:03.486 "send_buf_size": 4096, 00:19:03.486 "enable_recv_pipe": true, 00:19:03.486 "enable_quickack": false, 00:19:03.486 "enable_placement_id": 0, 00:19:03.486 "enable_zerocopy_send_server": true, 00:19:03.486 "enable_zerocopy_send_client": false, 00:19:03.486 "zerocopy_threshold": 0, 00:19:03.486 "tls_version": 0, 00:19:03.486 "enable_ktls": false 00:19:03.486 } 00:19:03.486 }, 00:19:03.486 { 00:19:03.486 "method": "sock_impl_set_options", 00:19:03.486 "params": { 00:19:03.486 "impl_name": "posix", 00:19:03.486 "recv_buf_size": 2097152, 00:19:03.486 "send_buf_size": 2097152, 00:19:03.487 "enable_recv_pipe": true, 00:19:03.487 "enable_quickack": false, 00:19:03.487 "enable_placement_id": 0, 00:19:03.487 "enable_zerocopy_send_server": true, 00:19:03.487 "enable_zerocopy_send_client": false, 00:19:03.487 "zerocopy_threshold": 0, 00:19:03.487 "tls_version": 0, 00:19:03.487 "enable_ktls": false 00:19:03.487 } 00:19:03.487 } 00:19:03.487 ] 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "subsystem": "vmd", 00:19:03.487 "config": [] 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "subsystem": "accel", 00:19:03.487 "config": [ 00:19:03.487 { 00:19:03.487 "method": "accel_set_options", 00:19:03.487 "params": { 00:19:03.487 "small_cache_size": 128, 00:19:03.487 "large_cache_size": 16, 00:19:03.487 "task_count": 2048, 00:19:03.487 "sequence_count": 2048, 00:19:03.487 
"buf_count": 2048 00:19:03.487 } 00:19:03.487 } 00:19:03.487 ] 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "subsystem": "bdev", 00:19:03.487 "config": [ 00:19:03.487 { 00:19:03.487 "method": "bdev_set_options", 00:19:03.487 "params": { 00:19:03.487 "bdev_io_pool_size": 65535, 00:19:03.487 "bdev_io_cache_size": 256, 00:19:03.487 "bdev_auto_examine": true, 00:19:03.487 "iobuf_small_cache_size": 128, 00:19:03.487 "iobuf_large_cache_size": 16 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_raid_set_options", 00:19:03.487 "params": { 00:19:03.487 "process_window_size_kb": 1024, 00:19:03.487 "process_max_bandwidth_mb_sec": 0 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_iscsi_set_options", 00:19:03.487 "params": { 00:19:03.487 "timeout_sec": 30 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_nvme_set_options", 00:19:03.487 "params": { 00:19:03.487 "action_on_timeout": "none", 00:19:03.487 "timeout_us": 0, 00:19:03.487 "timeout_admin_us": 0, 00:19:03.487 "keep_alive_timeout_ms": 10000, 00:19:03.487 "arbitration_burst": 0, 00:19:03.487 "low_priority_weight": 0, 00:19:03.487 "medium_priority_weight": 0, 00:19:03.487 "high_priority_weight": 0, 00:19:03.487 "nvme_adminq_poll_period_us": 10000, 00:19:03.487 "nvme_ioq_poll_period_us": 0, 00:19:03.487 "io_queue_requests": 512, 00:19:03.487 "delay_cmd_submit": true, 00:19:03.487 "transport_retry_count": 4, 00:19:03.487 "bdev_retry_count": 3, 00:19:03.487 "transport_ack_timeout": 0, 00:19:03.487 "ctrlr_loss_timeout_sec": 0, 00:19:03.487 "reconnect_delay_sec": 0, 00:19:03.487 "fast_io_fail_timeout_sec": 0, 00:19:03.487 "disable_auto_failback": false, 00:19:03.487 "generate_uuids": false, 00:19:03.487 "transport_tos": 0, 00:19:03.487 "nvme_error_stat": false, 00:19:03.487 "rdma_srq_size": 0, 00:19:03.487 "io_path_stat": false, 00:19:03.487 "allow_accel_sequence": false, 00:19:03.487 "rdma_max_cq_size": 0, 00:19:03.487 "rdma_cm_event_timeout_ms": 0, 
00:19:03.487 "dhchap_digests": [ 00:19:03.487 "sha256", 00:19:03.487 "sha384", 00:19:03.487 "sha512" 00:19:03.487 ], 00:19:03.487 "dhchap_dhgroups": [ 00:19:03.487 "null", 00:19:03.487 "ffdhe2048", 00:19:03.487 "ffdhe3072", 00:19:03.487 "ffdhe4096", 00:19:03.487 "ffdhe6144", 00:19:03.487 "ffdhe8192" 00:19:03.487 ] 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_nvme_attach_controller", 00:19:03.487 "params": { 00:19:03.487 "name": "nvme0", 00:19:03.487 "trtype": "TCP", 00:19:03.487 "adrfam": "IPv4", 00:19:03.487 "traddr": "10.0.0.2", 00:19:03.487 "trsvcid": "4420", 00:19:03.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.487 "prchk_reftag": false, 00:19:03.487 "prchk_guard": false, 00:19:03.487 "ctrlr_loss_timeout_sec": 0, 00:19:03.487 "reconnect_delay_sec": 0, 00:19:03.487 "fast_io_fail_timeout_sec": 0, 00:19:03.487 "psk": "key0", 00:19:03.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.487 "hdgst": false, 00:19:03.487 "ddgst": false, 00:19:03.487 "multipath": "multipath" 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_nvme_set_hotplug", 00:19:03.487 "params": { 00:19:03.487 "period_us": 100000, 00:19:03.487 "enable": false 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_enable_histogram", 00:19:03.487 "params": { 00:19:03.487 "name": "nvme0n1", 00:19:03.487 "enable": true 00:19:03.487 } 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "method": "bdev_wait_for_examine" 00:19:03.487 } 00:19:03.487 ] 00:19:03.487 }, 00:19:03.487 { 00:19:03.487 "subsystem": "nbd", 00:19:03.487 "config": [] 00:19:03.487 } 00:19:03.487 ] 00:19:03.487 }' 00:19:03.487 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2759603 00:19:03.487 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2759603 ']' 00:19:03.487 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2759603 00:19:03.487 16:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.746 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.746 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759603 00:19:03.746 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.746 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.746 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759603' 00:19:03.746 killing process with pid 2759603 00:19:03.746 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2759603 00:19:03.746 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.746 00:19:03.747 Latency(us) 00:19:03.747 [2024-11-20T15:12:04.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.747 [2024-11-20T15:12:04.584Z] =================================================================================================================== 00:19:03.747 [2024-11-20T15:12:04.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.747 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2759603 00:19:03.747 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2759565 00:19:03.747 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2759565 ']' 00:19:03.747 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2759565 00:19:03.747 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.747 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.747 
16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759565 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759565' 00:19:04.006 killing process with pid 2759565 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2759565 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2759565 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.006 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:04.006 "subsystems": [ 00:19:04.006 { 00:19:04.006 "subsystem": "keyring", 00:19:04.006 "config": [ 00:19:04.006 { 00:19:04.006 "method": "keyring_file_add_key", 00:19:04.006 "params": { 00:19:04.006 "name": "key0", 00:19:04.006 "path": "/tmp/tmp.GlzvcAtcUL" 00:19:04.006 } 00:19:04.006 } 00:19:04.006 ] 00:19:04.006 }, 00:19:04.006 { 00:19:04.006 "subsystem": "iobuf", 00:19:04.006 "config": [ 00:19:04.006 { 00:19:04.006 "method": "iobuf_set_options", 00:19:04.006 "params": { 00:19:04.006 "small_pool_count": 8192, 00:19:04.006 "large_pool_count": 1024, 00:19:04.006 "small_bufsize": 8192, 00:19:04.006 "large_bufsize": 135168, 00:19:04.006 "enable_numa": false 00:19:04.006 } 00:19:04.006 } 00:19:04.006 ] 00:19:04.006 }, 00:19:04.006 { 00:19:04.006 "subsystem": "sock", 00:19:04.006 "config": [ 
00:19:04.006 { 00:19:04.006 "method": "sock_set_default_impl", 00:19:04.006 "params": { 00:19:04.006 "impl_name": "posix" 00:19:04.006 } 00:19:04.006 }, 00:19:04.006 { 00:19:04.006 "method": "sock_impl_set_options", 00:19:04.006 "params": { 00:19:04.006 "impl_name": "ssl", 00:19:04.006 "recv_buf_size": 4096, 00:19:04.006 "send_buf_size": 4096, 00:19:04.006 "enable_recv_pipe": true, 00:19:04.006 "enable_quickack": false, 00:19:04.006 "enable_placement_id": 0, 00:19:04.007 "enable_zerocopy_send_server": true, 00:19:04.007 "enable_zerocopy_send_client": false, 00:19:04.007 "zerocopy_threshold": 0, 00:19:04.007 "tls_version": 0, 00:19:04.007 "enable_ktls": false 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "sock_impl_set_options", 00:19:04.007 "params": { 00:19:04.007 "impl_name": "posix", 00:19:04.007 "recv_buf_size": 2097152, 00:19:04.007 "send_buf_size": 2097152, 00:19:04.007 "enable_recv_pipe": true, 00:19:04.007 "enable_quickack": false, 00:19:04.007 "enable_placement_id": 0, 00:19:04.007 "enable_zerocopy_send_server": true, 00:19:04.007 "enable_zerocopy_send_client": false, 00:19:04.007 "zerocopy_threshold": 0, 00:19:04.007 "tls_version": 0, 00:19:04.007 "enable_ktls": false 00:19:04.007 } 00:19:04.007 } 00:19:04.007 ] 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "subsystem": "vmd", 00:19:04.007 "config": [] 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "subsystem": "accel", 00:19:04.007 "config": [ 00:19:04.007 { 00:19:04.007 "method": "accel_set_options", 00:19:04.007 "params": { 00:19:04.007 "small_cache_size": 128, 00:19:04.007 "large_cache_size": 16, 00:19:04.007 "task_count": 2048, 00:19:04.007 "sequence_count": 2048, 00:19:04.007 "buf_count": 2048 00:19:04.007 } 00:19:04.007 } 00:19:04.007 ] 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "subsystem": "bdev", 00:19:04.007 "config": [ 00:19:04.007 { 00:19:04.007 "method": "bdev_set_options", 00:19:04.007 "params": { 00:19:04.007 "bdev_io_pool_size": 65535, 00:19:04.007 "bdev_io_cache_size": 
256, 00:19:04.007 "bdev_auto_examine": true, 00:19:04.007 "iobuf_small_cache_size": 128, 00:19:04.007 "iobuf_large_cache_size": 16 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "bdev_raid_set_options", 00:19:04.007 "params": { 00:19:04.007 "process_window_size_kb": 1024, 00:19:04.007 "process_max_bandwidth_mb_sec": 0 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "bdev_iscsi_set_options", 00:19:04.007 "params": { 00:19:04.007 "timeout_sec": 30 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "bdev_nvme_set_options", 00:19:04.007 "params": { 00:19:04.007 "action_on_timeout": "none", 00:19:04.007 "timeout_us": 0, 00:19:04.007 "timeout_admin_us": 0, 00:19:04.007 "keep_alive_timeout_ms": 10000, 00:19:04.007 "arbitration_burst": 0, 00:19:04.007 "low_priority_weight": 0, 00:19:04.007 "medium_priority_weight": 0, 00:19:04.007 "high_priority_weight": 0, 00:19:04.007 "nvme_adminq_poll_period_us": 10000, 00:19:04.007 "nvme_ioq_poll_period_us": 0, 00:19:04.007 "io_queue_requests": 0, 00:19:04.007 "delay_cmd_submit": true, 00:19:04.007 "transport_retry_count": 4, 00:19:04.007 "bdev_retry_count": 3, 00:19:04.007 "transport_ack_timeout": 0, 00:19:04.007 "ctrlr_loss_timeout_sec": 0, 00:19:04.007 "reconnect_delay_sec": 0, 00:19:04.007 "fast_io_fail_timeout_sec": 0, 00:19:04.007 "disable_auto_failback": false, 00:19:04.007 "generate_uuids": false, 00:19:04.007 "transport_tos": 0, 00:19:04.007 "nvme_error_stat": false, 00:19:04.007 "rdma_srq_size": 0, 00:19:04.007 "io_path_stat": false, 00:19:04.007 "allow_accel_sequence": false, 00:19:04.007 "rdma_max_cq_size": 0, 00:19:04.007 "rdma_cm_event_timeout_ms": 0, 00:19:04.007 "dhchap_digests": [ 00:19:04.007 "sha256", 00:19:04.007 "sha384", 00:19:04.007 "sha512" 00:19:04.007 ], 00:19:04.007 "dhchap_dhgroups": [ 00:19:04.007 "null", 00:19:04.007 "ffdhe2048", 00:19:04.007 "ffdhe3072", 00:19:04.007 "ffdhe4096", 00:19:04.007 "ffdhe6144", 00:19:04.007 "ffdhe8192" 00:19:04.007 ] 
00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "bdev_nvme_set_hotplug", 00:19:04.007 "params": { 00:19:04.007 "period_us": 100000, 00:19:04.007 "enable": false 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "bdev_malloc_create", 00:19:04.007 "params": { 00:19:04.007 "name": "malloc0", 00:19:04.007 "num_blocks": 8192, 00:19:04.007 "block_size": 4096, 00:19:04.007 "physical_block_size": 4096, 00:19:04.007 "uuid": "f887d5bf-1fe2-4302-9174-29f397b979c3", 00:19:04.007 "optimal_io_boundary": 0, 00:19:04.007 "md_size": 0, 00:19:04.007 "dif_type": 0, 00:19:04.007 "dif_is_head_of_md": false, 00:19:04.007 "dif_pi_format": 0 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "bdev_wait_for_examine" 00:19:04.007 } 00:19:04.007 ] 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "subsystem": "nbd", 00:19:04.007 "config": [] 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "subsystem": "scheduler", 00:19:04.007 "config": [ 00:19:04.007 { 00:19:04.007 "method": "framework_set_scheduler", 00:19:04.007 "params": { 00:19:04.007 "name": "static" 00:19:04.007 } 00:19:04.007 } 00:19:04.007 ] 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "subsystem": "nvmf", 00:19:04.007 "config": [ 00:19:04.007 { 00:19:04.007 "method": "nvmf_set_config", 00:19:04.007 "params": { 00:19:04.007 "discovery_filter": "match_any", 00:19:04.007 "admin_cmd_passthru": { 00:19:04.007 "identify_ctrlr": false 00:19:04.007 }, 00:19:04.007 "dhchap_digests": [ 00:19:04.007 "sha256", 00:19:04.007 "sha384", 00:19:04.007 "sha512" 00:19:04.007 ], 00:19:04.007 "dhchap_dhgroups": [ 00:19:04.007 "null", 00:19:04.007 "ffdhe2048", 00:19:04.007 "ffdhe3072", 00:19:04.007 "ffdhe4096", 00:19:04.007 "ffdhe6144", 00:19:04.007 "ffdhe8192" 00:19:04.007 ] 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "nvmf_set_max_subsystems", 00:19:04.007 "params": { 00:19:04.007 "max_subsystems": 1024 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": 
"nvmf_set_crdt", 00:19:04.007 "params": { 00:19:04.007 "crdt1": 0, 00:19:04.007 "crdt2": 0, 00:19:04.007 "crdt3": 0 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "nvmf_create_transport", 00:19:04.007 "params": { 00:19:04.007 "trtype": "TCP", 00:19:04.007 "max_queue_depth": 128, 00:19:04.007 "max_io_qpairs_per_ctrlr": 127, 00:19:04.007 "in_capsule_data_size": 4096, 00:19:04.007 "max_io_size": 131072, 00:19:04.007 "io_unit_size": 131072, 00:19:04.007 "max_aq_depth": 128, 00:19:04.007 "num_shared_buffers": 511, 00:19:04.007 "buf_cache_size": 4294967295, 00:19:04.007 "dif_insert_or_strip": false, 00:19:04.007 "zcopy": false, 00:19:04.007 "c2h_success": false, 00:19:04.007 "sock_priority": 0, 00:19:04.007 "abort_timeout_sec": 1, 00:19:04.007 "ack_timeout": 0, 00:19:04.007 "data_wr_pool_size": 0 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "nvmf_create_subsystem", 00:19:04.007 "params": { 00:19:04.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.007 "allow_any_host": false, 00:19:04.007 "serial_number": "00000000000000000000", 00:19:04.007 "model_number": "SPDK bdev Controller", 00:19:04.007 "max_namespaces": 32, 00:19:04.007 "min_cntlid": 1, 00:19:04.007 "max_cntlid": 65519, 00:19:04.007 "ana_reporting": false 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "nvmf_subsystem_add_host", 00:19:04.007 "params": { 00:19:04.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.007 "host": "nqn.2016-06.io.spdk:host1", 00:19:04.007 "psk": "key0" 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 00:19:04.007 "method": "nvmf_subsystem_add_ns", 00:19:04.007 "params": { 00:19:04.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.007 "namespace": { 00:19:04.007 "nsid": 1, 00:19:04.007 "bdev_name": "malloc0", 00:19:04.007 "nguid": "F887D5BF1FE24302917429F397B979C3", 00:19:04.007 "uuid": "f887d5bf-1fe2-4302-9174-29f397b979c3", 00:19:04.007 "no_auto_visible": false 00:19:04.007 } 00:19:04.007 } 00:19:04.007 }, 00:19:04.007 { 
00:19:04.007 "method": "nvmf_subsystem_add_listener", 00:19:04.007 "params": { 00:19:04.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.007 "listen_address": { 00:19:04.007 "trtype": "TCP", 00:19:04.007 "adrfam": "IPv4", 00:19:04.007 "traddr": "10.0.0.2", 00:19:04.007 "trsvcid": "4420" 00:19:04.007 }, 00:19:04.007 "secure_channel": false, 00:19:04.007 "sock_impl": "ssl" 00:19:04.007 } 00:19:04.007 } 00:19:04.007 ] 00:19:04.007 } 00:19:04.007 ] 00:19:04.007 }' 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2760079 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2760079 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2760079 ']' 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.007 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.008 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.008 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.008 [2024-11-20 16:12:04.811498] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:19:04.008 [2024-11-20 16:12:04.811544] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.267 [2024-11-20 16:12:04.891770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.267 [2024-11-20 16:12:04.932785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.267 [2024-11-20 16:12:04.932824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.267 [2024-11-20 16:12:04.932831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.267 [2024-11-20 16:12:04.932837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.267 [2024-11-20 16:12:04.932842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.267 [2024-11-20 16:12:04.933442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.526 [2024-11-20 16:12:05.146909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.526 [2024-11-20 16:12:05.178944] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.526 [2024-11-20 16:12:05.179154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2760324 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2760324 /var/tmp/bdevperf.sock 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2760324 ']' 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.096 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:05.096 "subsystems": [ 00:19:05.096 { 00:19:05.096 "subsystem": "keyring", 00:19:05.096 "config": [ 00:19:05.096 { 00:19:05.096 "method": "keyring_file_add_key", 00:19:05.096 "params": { 00:19:05.096 "name": "key0", 00:19:05.096 "path": "/tmp/tmp.GlzvcAtcUL" 00:19:05.096 } 00:19:05.096 } 00:19:05.096 ] 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "subsystem": "iobuf", 00:19:05.096 "config": [ 00:19:05.096 { 00:19:05.096 "method": "iobuf_set_options", 00:19:05.096 "params": { 00:19:05.096 "small_pool_count": 8192, 00:19:05.096 "large_pool_count": 1024, 00:19:05.096 "small_bufsize": 8192, 00:19:05.096 "large_bufsize": 135168, 00:19:05.096 "enable_numa": false 00:19:05.096 } 00:19:05.096 } 00:19:05.096 ] 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "subsystem": "sock", 00:19:05.096 "config": [ 00:19:05.096 { 00:19:05.096 "method": "sock_set_default_impl", 00:19:05.096 "params": { 00:19:05.096 "impl_name": "posix" 00:19:05.096 } 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "method": "sock_impl_set_options", 00:19:05.096 "params": { 00:19:05.096 "impl_name": "ssl", 00:19:05.096 "recv_buf_size": 4096, 00:19:05.096 "send_buf_size": 4096, 00:19:05.096 "enable_recv_pipe": true, 00:19:05.096 "enable_quickack": false, 00:19:05.096 "enable_placement_id": 0, 00:19:05.096 "enable_zerocopy_send_server": true, 00:19:05.096 "enable_zerocopy_send_client": false, 00:19:05.096 "zerocopy_threshold": 0, 00:19:05.096 "tls_version": 0, 00:19:05.096 "enable_ktls": false 00:19:05.096 } 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "method": "sock_impl_set_options", 00:19:05.096 "params": { 
00:19:05.096 "impl_name": "posix", 00:19:05.096 "recv_buf_size": 2097152, 00:19:05.096 "send_buf_size": 2097152, 00:19:05.096 "enable_recv_pipe": true, 00:19:05.096 "enable_quickack": false, 00:19:05.096 "enable_placement_id": 0, 00:19:05.096 "enable_zerocopy_send_server": true, 00:19:05.096 "enable_zerocopy_send_client": false, 00:19:05.096 "zerocopy_threshold": 0, 00:19:05.096 "tls_version": 0, 00:19:05.096 "enable_ktls": false 00:19:05.096 } 00:19:05.096 } 00:19:05.096 ] 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "subsystem": "vmd", 00:19:05.096 "config": [] 00:19:05.096 }, 00:19:05.096 { 00:19:05.096 "subsystem": "accel", 00:19:05.096 "config": [ 00:19:05.096 { 00:19:05.096 "method": "accel_set_options", 00:19:05.096 "params": { 00:19:05.096 "small_cache_size": 128, 00:19:05.096 "large_cache_size": 16, 00:19:05.096 "task_count": 2048, 00:19:05.096 "sequence_count": 2048, 00:19:05.096 "buf_count": 2048 00:19:05.096 } 00:19:05.096 } 00:19:05.096 ] 00:19:05.096 }, 00:19:05.097 { 00:19:05.097 "subsystem": "bdev", 00:19:05.097 "config": [ 00:19:05.097 { 00:19:05.097 "method": "bdev_set_options", 00:19:05.097 "params": { 00:19:05.097 "bdev_io_pool_size": 65535, 00:19:05.097 "bdev_io_cache_size": 256, 00:19:05.097 "bdev_auto_examine": true, 00:19:05.097 "iobuf_small_cache_size": 128, 00:19:05.097 "iobuf_large_cache_size": 16 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "method": "bdev_raid_set_options", 00:19:05.097 "params": { 00:19:05.097 "process_window_size_kb": 1024, 00:19:05.097 "process_max_bandwidth_mb_sec": 0 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "method": "bdev_iscsi_set_options", 00:19:05.097 "params": { 00:19:05.097 "timeout_sec": 30 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "method": "bdev_nvme_set_options", 00:19:05.097 "params": { 00:19:05.097 "action_on_timeout": "none", 00:19:05.097 "timeout_us": 0, 00:19:05.097 "timeout_admin_us": 0, 00:19:05.097 "keep_alive_timeout_ms": 10000, 00:19:05.097 
"arbitration_burst": 0, 00:19:05.097 "low_priority_weight": 0, 00:19:05.097 "medium_priority_weight": 0, 00:19:05.097 "high_priority_weight": 0, 00:19:05.097 "nvme_adminq_poll_period_us": 10000, 00:19:05.097 "nvme_ioq_poll_period_us": 0, 00:19:05.097 "io_queue_requests": 512, 00:19:05.097 "delay_cmd_submit": true, 00:19:05.097 "transport_retry_count": 4, 00:19:05.097 "bdev_retry_count": 3, 00:19:05.097 "transport_ack_timeout": 0, 00:19:05.097 "ctrlr_loss_timeout_sec": 0, 00:19:05.097 "reconnect_delay_sec": 0, 00:19:05.097 "fast_io_fail_timeout_sec": 0, 00:19:05.097 "disable_auto_failback": false, 00:19:05.097 "generate_uuids": false, 00:19:05.097 "transport_tos": 0, 00:19:05.097 "nvme_error_stat": false, 00:19:05.097 "rdma_srq_size": 0, 00:19:05.097 "io_path_stat": false, 00:19:05.097 "allow_accel_sequence": false, 00:19:05.097 "rdma_max_cq_size": 0, 00:19:05.097 "rdma_cm_event_timeout_ms": 0, 00:19:05.097 "dhchap_digests": [ 00:19:05.097 "sha256", 00:19:05.097 "sha384", 00:19:05.097 "sha512" 00:19:05.097 ], 00:19:05.097 "dhchap_dhgroups": [ 00:19:05.097 "null", 00:19:05.097 "ffdhe2048", 00:19:05.097 "ffdhe3072", 00:19:05.097 "ffdhe4096", 00:19:05.097 "ffdhe6144", 00:19:05.097 "ffdhe8192" 00:19:05.097 ] 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "method": "bdev_nvme_attach_controller", 00:19:05.097 "params": { 00:19:05.097 "name": "nvme0", 00:19:05.097 "trtype": "TCP", 00:19:05.097 "adrfam": "IPv4", 00:19:05.097 "traddr": "10.0.0.2", 00:19:05.097 "trsvcid": "4420", 00:19:05.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.097 "prchk_reftag": false, 00:19:05.097 "prchk_guard": false, 00:19:05.097 "ctrlr_loss_timeout_sec": 0, 00:19:05.097 "reconnect_delay_sec": 0, 00:19:05.097 "fast_io_fail_timeout_sec": 0, 00:19:05.097 "psk": "key0", 00:19:05.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.097 "hdgst": false, 00:19:05.097 "ddgst": false, 00:19:05.097 "multipath": "multipath" 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 
"method": "bdev_nvme_set_hotplug", 00:19:05.097 "params": { 00:19:05.097 "period_us": 100000, 00:19:05.097 "enable": false 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "method": "bdev_enable_histogram", 00:19:05.097 "params": { 00:19:05.097 "name": "nvme0n1", 00:19:05.097 "enable": true 00:19:05.097 } 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "method": "bdev_wait_for_examine" 00:19:05.097 } 00:19:05.097 ] 00:19:05.097 }, 00:19:05.097 { 00:19:05.097 "subsystem": "nbd", 00:19:05.097 "config": [] 00:19:05.097 } 00:19:05.097 ] 00:19:05.097 }' 00:19:05.097 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.097 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.097 [2024-11-20 16:12:05.729266] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:19:05.097 [2024-11-20 16:12:05.729316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760324 ] 00:19:05.097 [2024-11-20 16:12:05.804472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.097 [2024-11-20 16:12:05.845807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.356 [2024-11-20 16:12:06.000744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.925 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.925 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.925 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:05.925 16:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:06.184 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.184 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.184 Running I/O for 1 seconds... 00:19:07.132 5271.00 IOPS, 20.59 MiB/s 00:19:07.132 Latency(us) 00:19:07.132 [2024-11-20T15:12:07.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.132 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:07.132 Verification LBA range: start 0x0 length 0x2000 00:19:07.132 nvme0n1 : 1.01 5329.49 20.82 0.00 0.00 23852.69 5527.82 21541.40 00:19:07.132 [2024-11-20T15:12:07.969Z] =================================================================================================================== 00:19:07.132 [2024-11-20T15:12:07.969Z] Total : 5329.49 20.82 0.00 0.00 23852.69 5527.82 21541.40 00:19:07.132 { 00:19:07.132 "results": [ 00:19:07.132 { 00:19:07.132 "job": "nvme0n1", 00:19:07.132 "core_mask": "0x2", 00:19:07.132 "workload": "verify", 00:19:07.132 "status": "finished", 00:19:07.132 "verify_range": { 00:19:07.132 "start": 0, 00:19:07.132 "length": 8192 00:19:07.132 }, 00:19:07.132 "queue_depth": 128, 00:19:07.132 "io_size": 4096, 00:19:07.132 "runtime": 1.01323, 00:19:07.132 "iops": 5329.4908362365895, 00:19:07.132 "mibps": 20.818323579049178, 00:19:07.132 "io_failed": 0, 00:19:07.132 "io_timeout": 0, 00:19:07.132 "avg_latency_us": 23852.69257069243, 00:19:07.132 "min_latency_us": 5527.819130434783, 00:19:07.132 "max_latency_us": 21541.398260869566 00:19:07.132 } 00:19:07.132 ], 00:19:07.132 "core_count": 1 00:19:07.132 } 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:07.132 16:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:07.132 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:07.132 nvmf_trace.0 00:19:07.391 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2760324 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2760324 ']' 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2760324 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2760324 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760324' 00:19:07.391 killing process with pid 2760324 00:19:07.391 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2760324 00:19:07.392 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.392 00:19:07.392 Latency(us) 00:19:07.392 [2024-11-20T15:12:08.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.392 [2024-11-20T15:12:08.229Z] =================================================================================================================== 00:19:07.392 [2024-11-20T15:12:08.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2760324 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.392 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.392 rmmod nvme_tcp 00:19:07.651 rmmod nvme_fabrics 00:19:07.651 rmmod nvme_keyring 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2760079 ']' 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2760079 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2760079 ']' 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2760079 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760079 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760079' 00:19:07.651 killing process with pid 2760079 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2760079 00:19:07.651 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2760079 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.910 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qYSdpV8dyp /tmp/tmp.Lzmjv981ND /tmp/tmp.GlzvcAtcUL 00:19:09.817 00:19:09.817 real 1m19.611s 00:19:09.817 user 2m1.293s 00:19:09.817 sys 0m31.256s 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.817 ************************************ 00:19:09.817 END TEST nvmf_tls 00:19:09.817 ************************************ 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.817 16:12:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.818 16:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.818 ************************************ 00:19:09.818 START TEST nvmf_fips 00:19:09.818 ************************************ 00:19:09.818 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:10.078 * Looking for test storage... 00:19:10.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.078 
16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:10.078 16:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.078 --rc genhtml_branch_coverage=1 00:19:10.078 --rc genhtml_function_coverage=1 00:19:10.078 --rc genhtml_legend=1 00:19:10.078 --rc geninfo_all_blocks=1 00:19:10.078 --rc geninfo_unexecuted_blocks=1 00:19:10.078 00:19:10.078 ' 00:19:10.078 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.078 --rc genhtml_branch_coverage=1 00:19:10.078 --rc genhtml_function_coverage=1 00:19:10.079 --rc genhtml_legend=1 00:19:10.079 --rc geninfo_all_blocks=1 00:19:10.079 --rc geninfo_unexecuted_blocks=1 00:19:10.079 00:19:10.079 ' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.079 --rc genhtml_branch_coverage=1 00:19:10.079 --rc genhtml_function_coverage=1 00:19:10.079 --rc genhtml_legend=1 00:19:10.079 --rc geninfo_all_blocks=1 00:19:10.079 --rc geninfo_unexecuted_blocks=1 00:19:10.079 00:19:10.079 ' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.079 --rc genhtml_branch_coverage=1 00:19:10.079 --rc genhtml_function_coverage=1 00:19:10.079 --rc genhtml_legend=1 00:19:10.079 --rc geninfo_all_blocks=1 00:19:10.079 --rc geninfo_unexecuted_blocks=1 00:19:10.079 00:19:10.079 ' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.079 16:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.079 16:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
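Editor's note: the paths/export.sh trace above prepends the same toolchain directories each time it is sourced, which is why the exported PATH carries many duplicate `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries. A hedged sketch of an idempotent prepend that avoids that growth (the helper name `path_prepend` is mine, not the script's):

```shell
#!/usr/bin/env bash
# Idempotent PATH prepend: only add the directory if it is not already
# a PATH component, avoiding the duplication visible in the trace.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, leave PATH alone
        *) PATH="$1:$PATH" ;;
    esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # → /opt/go/1.21.1/bin:/usr/bin:/bin
```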
00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:10.079 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:10.080 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:10.339 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:10.339 Error setting digest 00:19:10.339 40624F49947F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:10.339 40624F49947F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.339 16:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:10.339 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
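Editor's note: the e810/x722/mlx ID tables above feed a discovery loop that, for each matching PCI function, lists its network interfaces from sysfs with `pci_net_devs=(".../net/"*)` (producing the `Found net devices under ...` lines in the trace). A minimal sketch of that sysfs lookup, assuming a Linux sysfs layout (the `base` parameter is added here only so the helper is testable):

```shell
#!/usr/bin/env bash
# For a PCI function like 0000:86:00.0, the kernel exposes its network
# interfaces as directories under /sys/bus/pci/devices/<bdf>/net/.
pci_net_ifaces() {
    local pci=$1 base=${2:-/sys/bus/pci/devices} dev
    for dev in "$base/$pci/net/"*; do
        [ -e "$dev" ] || continue   # glob did not match: no netdev bound
        echo "${dev##*/}"           # strip the sysfs path prefix
    done
}

# Report every PCI function that has at least one net interface.
for pci in /sys/bus/pci/devices/*; do
    ifaces=$(pci_net_ifaces "${pci##*/}") || true
    if [ -n "$ifaces" ]; then
        echo "Found net devices under ${pci##*/}: $ifaces"
    fi
done
```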
00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:16.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:16.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.915 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:16.916 Found net devices under 0000:86:00.0: cvl_0_0 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:16.916 Found net devices under 0000:86:00.1: cvl_0_1 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.916 16:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:16.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:19:16.916 00:19:16.916 --- 10.0.0.2 ping statistics --- 00:19:16.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.916 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:19:16.916 00:19:16.916 --- 10.0.0.1 ping statistics --- 00:19:16.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.916 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.916 16:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2764737 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2764737 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2764737 ']' 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.916 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:16.916 [2024-11-20 16:12:17.059375] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
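Editor's note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a retry loop that polls until the RPC socket appears (or the target dies first). A hypothetical re-creation of that pattern; the real helper is `waitforlisten` in autotest_common.sh, and this sketch is not its exact logic:

```shell
#!/usr/bin/env bash
# Poll until $sock exists as a UNIX socket, bailing out early if the
# target process ($pid) has already exited.
wait_for_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0               # socket is up
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the traced run this would be invoked roughly as `wait_for_socket "$nvmfpid" /var/tmp/spdk.sock` right after launching nvmf_tgt.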
00:19:16.916 [2024-11-20 16:12:17.059425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.916 [2024-11-20 16:12:17.135492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.916 [2024-11-20 16:12:17.173846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.916 [2024-11-20 16:12:17.173884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.916 [2024-11-20 16:12:17.173891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.916 [2024-11-20 16:12:17.173897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.916 [2024-11-20 16:12:17.173902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.916 [2024-11-20 16:12:17.174505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.fxS 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.fxS 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.fxS 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.fxS 00:19:17.176 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:17.436 [2024-11-20 16:12:18.100477] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.436 [2024-11-20 16:12:18.116482] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.436 [2024-11-20 16:12:18.116672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.436 malloc0 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2764893 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2764893 /var/tmp/bdevperf.sock 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2764893 ']' 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.436 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:17.436 [2024-11-20 16:12:18.236955] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:19:17.436 [2024-11-20 16:12:18.237006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764893 ] 00:19:17.695 [2024-11-20 16:12:18.312393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.695 [2024-11-20 16:12:18.353513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.695 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.695 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:17.695 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.fxS 00:19:17.955 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.213 [2024-11-20 16:12:18.817532] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.213 TLSTESTn1 00:19:18.213 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:18.213 Running I/O for 10 seconds... 
00:19:20.531 5312.00 IOPS, 20.75 MiB/s [2024-11-20T15:12:22.305Z] 5370.50 IOPS, 20.98 MiB/s [2024-11-20T15:12:23.320Z] 5381.00 IOPS, 21.02 MiB/s [2024-11-20T15:12:24.314Z] 5378.50 IOPS, 21.01 MiB/s [2024-11-20T15:12:25.248Z] 5272.20 IOPS, 20.59 MiB/s [2024-11-20T15:12:26.184Z] 5121.00 IOPS, 20.00 MiB/s [2024-11-20T15:12:27.122Z] 5035.57 IOPS, 19.67 MiB/s [2024-11-20T15:12:28.059Z] 4921.62 IOPS, 19.23 MiB/s [2024-11-20T15:12:29.437Z] 4810.78 IOPS, 18.79 MiB/s [2024-11-20T15:12:29.437Z] 4740.30 IOPS, 18.52 MiB/s 00:19:28.600 Latency(us) 00:19:28.600 [2024-11-20T15:12:29.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.600 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.600 Verification LBA range: start 0x0 length 0x2000 00:19:28.600 TLSTESTn1 : 10.03 4736.56 18.50 0.00 0.00 26966.27 5100.41 31913.18 00:19:28.600 [2024-11-20T15:12:29.437Z] =================================================================================================================== 00:19:28.600 [2024-11-20T15:12:29.437Z] Total : 4736.56 18.50 0.00 0.00 26966.27 5100.41 31913.18 00:19:28.600 { 00:19:28.600 "results": [ 00:19:28.600 { 00:19:28.600 "job": "TLSTESTn1", 00:19:28.600 "core_mask": "0x4", 00:19:28.600 "workload": "verify", 00:19:28.600 "status": "finished", 00:19:28.600 "verify_range": { 00:19:28.600 "start": 0, 00:19:28.600 "length": 8192 00:19:28.600 }, 00:19:28.600 "queue_depth": 128, 00:19:28.600 "io_size": 4096, 00:19:28.600 "runtime": 10.0347, 00:19:28.600 "iops": 4736.564122494942, 00:19:28.600 "mibps": 18.502203603495868, 00:19:28.600 "io_failed": 0, 00:19:28.600 "io_timeout": 0, 00:19:28.600 "avg_latency_us": 26966.272414493364, 00:19:28.600 "min_latency_us": 5100.410434782609, 00:19:28.600 "max_latency_us": 31913.182608695653 00:19:28.600 } 00:19:28.600 ], 00:19:28.600 "core_count": 1 00:19:28.600 } 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:28.600 
16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:28.600 nvmf_trace.0 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2764893 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2764893 ']' 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2764893 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2764893 00:19:28.600 16:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2764893' 00:19:28.600 killing process with pid 2764893 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2764893 00:19:28.600 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.600 00:19:28.600 Latency(us) 00:19:28.600 [2024-11-20T15:12:29.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.600 [2024-11-20T15:12:29.437Z] =================================================================================================================== 00:19:28.600 [2024-11-20T15:12:29.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2764893 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.600 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.600 rmmod nvme_tcp 00:19:28.600 rmmod nvme_fabrics 00:19:28.600 rmmod nvme_keyring 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2764737 ']' 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2764737 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2764737 ']' 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2764737 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2764737 00:19:28.859 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2764737' 00:19:28.860 killing process with pid 2764737 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2764737 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2764737 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.860 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.fxS 00:19:31.396 00:19:31.396 real 0m21.098s 00:19:31.396 user 0m21.533s 00:19:31.396 sys 0m10.218s 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.396 ************************************ 00:19:31.396 END TEST nvmf_fips 00:19:31.396 ************************************ 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.396 ************************************ 00:19:31.396 START TEST nvmf_control_msg_list 00:19:31.396 ************************************ 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:31.396 * Looking for test storage... 00:19:31.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.396 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.397 16:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.397 --rc genhtml_branch_coverage=1 00:19:31.397 --rc genhtml_function_coverage=1 00:19:31.397 --rc genhtml_legend=1 00:19:31.397 --rc geninfo_all_blocks=1 00:19:31.397 --rc geninfo_unexecuted_blocks=1 00:19:31.397 00:19:31.397 ' 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.397 --rc genhtml_branch_coverage=1 00:19:31.397 --rc genhtml_function_coverage=1 00:19:31.397 --rc genhtml_legend=1 00:19:31.397 --rc geninfo_all_blocks=1 00:19:31.397 --rc geninfo_unexecuted_blocks=1 00:19:31.397 00:19:31.397 ' 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.397 --rc genhtml_branch_coverage=1 00:19:31.397 --rc genhtml_function_coverage=1 00:19:31.397 --rc genhtml_legend=1 00:19:31.397 --rc geninfo_all_blocks=1 00:19:31.397 --rc geninfo_unexecuted_blocks=1 00:19:31.397 00:19:31.397 ' 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:31.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.397 --rc genhtml_branch_coverage=1 00:19:31.397 --rc genhtml_function_coverage=1 00:19:31.397 --rc genhtml_legend=1 00:19:31.397 --rc geninfo_all_blocks=1 00:19:31.397 --rc geninfo_unexecuted_blocks=1 00:19:31.397 00:19:31.397 ' 00:19:31.397 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.397 16:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.397 16:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.397 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.398 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.968 16:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:37.968 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:37.969 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:37.969 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:37.969 16:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:37.969 Found net devices under 0000:86:00.0: cvl_0_0 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.969 16:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:37.969 Found net devices under 0000:86:00.1: cvl_0_1 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.969 16:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:19:37.969 00:19:37.969 --- 10.0.0.2 ping statistics --- 00:19:37.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.969 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:19:37.969 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:19:37.969 00:19:37.970 --- 10.0.0.1 ping statistics --- 00:19:37.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.970 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2770171 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2770171 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2770171 ']' 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.970 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 [2024-11-20 16:12:38.034260] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:19:37.970 [2024-11-20 16:12:38.034316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.970 [2024-11-20 16:12:38.120730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.970 [2024-11-20 16:12:38.161476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.970 [2024-11-20 16:12:38.161514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.970 [2024-11-20 16:12:38.161521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.970 [2024-11-20 16:12:38.161527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.970 [2024-11-20 16:12:38.161532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.970 [2024-11-20 16:12:38.162081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 [2024-11-20 16:12:38.297641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 Malloc0 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 [2024-11-20 16:12:38.337898] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2770380 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2770381 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2770382 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2770380 00:19:37.970 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.970 [2024-11-20 16:12:38.416459] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:37.970 [2024-11-20 16:12:38.416630] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:37.970 [2024-11-20 16:12:38.426207] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:38.906 Initializing NVMe Controllers 00:19:38.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:38.906 Initialization complete. Launching workers. 00:19:38.906 ======================================================== 00:19:38.906 Latency(us) 00:19:38.906 Device Information : IOPS MiB/s Average min max 00:19:38.906 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6113.00 23.88 163.24 124.72 359.10 00:19:38.906 ======================================================== 00:19:38.906 Total : 6113.00 23.88 163.24 124.72 359.10 00:19:38.906 00:19:38.906 Initializing NVMe Controllers 00:19:38.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:38.906 Initialization complete. Launching workers. 
00:19:38.906 ======================================================== 00:19:38.906 Latency(us) 00:19:38.906 Device Information : IOPS MiB/s Average min max 00:19:38.906 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40924.90 40580.13 41851.91 00:19:38.906 ======================================================== 00:19:38.906 Total : 25.00 0.10 40924.90 40580.13 41851.91 00:19:38.906 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2770381 00:19:38.907 Initializing NVMe Controllers 00:19:38.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:38.907 Initialization complete. Launching workers. 00:19:38.907 ======================================================== 00:19:38.907 Latency(us) 00:19:38.907 Device Information : IOPS MiB/s Average min max 00:19:38.907 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6404.00 25.02 155.81 118.63 326.00 00:19:38.907 ======================================================== 00:19:38.907 Total : 6404.00 25.02 155.81 118.63 326.00 00:19:38.907 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2770382 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.907 16:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.907 rmmod nvme_tcp 00:19:38.907 rmmod nvme_fabrics 00:19:38.907 rmmod nvme_keyring 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2770171 ']' 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2770171 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2770171 ']' 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2770171 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.907 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2770171 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2770171' 00:19:39.167 killing process with pid 2770171 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2770171 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2770171 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.167 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.706 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.706 00:19:41.706 real 0m10.179s 00:19:41.706 user 0m6.579s 
00:19:41.706 sys 0m5.677s 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.706 ************************************ 00:19:41.706 END TEST nvmf_control_msg_list 00:19:41.706 ************************************ 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.706 ************************************ 00:19:41.706 START TEST nvmf_wait_for_buf 00:19:41.706 ************************************ 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:41.706 * Looking for test storage... 
00:19:41.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.706 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.707 --rc genhtml_branch_coverage=1 00:19:41.707 --rc genhtml_function_coverage=1 00:19:41.707 --rc genhtml_legend=1 00:19:41.707 --rc geninfo_all_blocks=1 00:19:41.707 --rc geninfo_unexecuted_blocks=1 00:19:41.707 00:19:41.707 ' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.707 --rc genhtml_branch_coverage=1 00:19:41.707 --rc genhtml_function_coverage=1 00:19:41.707 --rc genhtml_legend=1 00:19:41.707 --rc geninfo_all_blocks=1 00:19:41.707 --rc geninfo_unexecuted_blocks=1 00:19:41.707 00:19:41.707 ' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.707 --rc genhtml_branch_coverage=1 00:19:41.707 --rc genhtml_function_coverage=1 00:19:41.707 --rc genhtml_legend=1 00:19:41.707 --rc geninfo_all_blocks=1 00:19:41.707 --rc geninfo_unexecuted_blocks=1 00:19:41.707 00:19:41.707 ' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.707 --rc genhtml_branch_coverage=1 00:19:41.707 --rc genhtml_function_coverage=1 00:19:41.707 --rc genhtml_legend=1 00:19:41.707 --rc geninfo_all_blocks=1 00:19:41.707 --rc geninfo_unexecuted_blocks=1 00:19:41.707 00:19:41.707 ' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.707 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
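The `[: : integer expression expected` message captured above comes from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: POSIX `[` refuses to compare an empty string with `-eq`. A minimal repro and a guarded alternative are sketched below; the variable name `val` is illustrative, not the actual variable used in `nvmf/common.sh`.

```shell
#!/bin/sh
# Repro of the error seen in the log: empty string vs -eq fails in `[`.
val=""                              # stands in for the unset flag
if [ "$val" -eq 1 ] 2>/dev/null; then   # this test errors out (exit status 2)
    echo "flag set"
fi
# Guarded form: default the empty value before the numeric comparison.
if [ "${val:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

With `val` empty, the first test fails with the same "integer expression expected" diagnostic (suppressed here), while the defaulted form prints `flag not set`.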
gather_supported_nvmf_pci_devs 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.708 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.283 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.283 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.284 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.284 16:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.284 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.284 16:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.284 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.284 16:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:19:48.284 00:19:48.284 --- 10.0.0.2 ping statistics --- 00:19:48.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.284 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:19:48.284 00:19:48.284 --- 10.0.0.1 ping statistics --- 00:19:48.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.284 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2774090 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2774090 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2774090 ']' 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.284 [2024-11-20 16:12:48.302439] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:19:48.284 [2024-11-20 16:12:48.302487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.284 [2024-11-20 16:12:48.380944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.284 [2024-11-20 16:12:48.423431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.284 [2024-11-20 16:12:48.423467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:48.284 [2024-11-20 16:12:48.423474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.284 [2024-11-20 16:12:48.423481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.284 [2024-11-20 16:12:48.423486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.284 [2024-11-20 16:12:48.424036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.284 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 
16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 Malloc0 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.285 [2024-11-20 16:12:48.598584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.285 [2024-11-20 16:12:48.626771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:48.285 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:48.285 [2024-11-20 16:12:48.714021] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:49.665 Initializing NVMe Controllers 00:19:49.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:49.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:49.665 Initialization complete. Launching workers. 00:19:49.665 ======================================================== 00:19:49.665 Latency(us) 00:19:49.665 Device Information : IOPS MiB/s Average min max 00:19:49.665 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.88 7264.34 63842.94 00:19:49.665 ======================================================== 00:19:49.665 Total : 129.00 16.12 32238.88 7264.34 63842.94 00:19:49.665 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.665 16:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.665 rmmod nvme_tcp 00:19:49.665 rmmod nvme_fabrics 00:19:49.665 rmmod nvme_keyring 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2774090 ']' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2774090 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2774090 ']' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2774090 
00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774090 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774090' 00:19:49.665 killing process with pid 2774090 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2774090 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2774090 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.665 16:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.665 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:52.203 00:19:52.203 real 0m10.473s 00:19:52.203 user 0m3.955s 00:19:52.203 sys 0m4.970s 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.203 ************************************ 00:19:52.203 END TEST nvmf_wait_for_buf 00:19:52.203 ************************************ 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.203 16:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.481 
16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:57.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.481 16:12:58 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:57.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.481 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:57.482 Found net devices under 0000:86:00.0: cvl_0_0 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:57.482 Found net devices under 0000:86:00.1: cvl_0_1 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.482 ************************************ 00:19:57.482 START TEST nvmf_perf_adq 00:19:57.482 ************************************ 00:19:57.482 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:57.741 * Looking for test storage... 00:19:57.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.741 --rc genhtml_branch_coverage=1 00:19:57.741 --rc genhtml_function_coverage=1 00:19:57.741 --rc genhtml_legend=1 00:19:57.741 --rc geninfo_all_blocks=1 00:19:57.741 --rc geninfo_unexecuted_blocks=1 00:19:57.741 00:19:57.741 ' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.741 --rc genhtml_branch_coverage=1 00:19:57.741 --rc genhtml_function_coverage=1 00:19:57.741 --rc genhtml_legend=1 00:19:57.741 --rc geninfo_all_blocks=1 00:19:57.741 --rc geninfo_unexecuted_blocks=1 00:19:57.741 00:19:57.741 ' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.741 --rc genhtml_branch_coverage=1 00:19:57.741 --rc genhtml_function_coverage=1 00:19:57.741 --rc genhtml_legend=1 00:19:57.741 --rc geninfo_all_blocks=1 00:19:57.741 --rc geninfo_unexecuted_blocks=1 00:19:57.741 00:19:57.741 ' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.741 --rc genhtml_branch_coverage=1 00:19:57.741 --rc genhtml_function_coverage=1 00:19:57.741 --rc genhtml_legend=1 00:19:57.741 --rc geninfo_all_blocks=1 00:19:57.741 --rc geninfo_unexecuted_blocks=1 00:19:57.741 00:19:57.741 ' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.741 16:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:57.741 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:04.313 16:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:04.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:04.313 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.313 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:04.314 Found net devices under 0000:86:00.0: cvl_0_0 00:20:04.314 16:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:04.314 Found net devices under 0000:86:00.1: cvl_0_1 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
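The trace above shows nvmf/common.sh bucketing supported NICs into the e810, x722, and mlx arrays by PCI device ID before choosing test interfaces. A minimal standalone sketch of that classification logic (the function name `classify_nvmf_nic` is invented for illustration; the ID table is only what is visible in this trace, not the full list in common.sh):

```shell
#!/usr/bin/env bash
# Classify a PCI device ID into the NIC-family buckets seen in the trace.
classify_nvmf_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;      # Intel E810 ports (ice driver)
        0x37d2)        echo x722 ;;      # Intel X722 (i40e driver)
        0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;       # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

classify_nvmf_nic 0x159b    # the ID reported twice in this run
```

On this runner both 0000:86:00.0 and 0000:86:00.1 report 0x159b, which is why `pci_devs` ends up holding the two E810 ports and the `(( 2 == 0 ))` emptiness check passes.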
00:20:04.314 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:04.574 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:06.480 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.760 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.761 16:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
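Records like `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` above map each PCI address to its kernel netdev names by globbing sysfs, then strip the paths down to bare interface names with `##*/`. A sketch of the same lookup against a mock sysfs tree (the `$fake_sys` layout is fabricated for the demo; on a real host the glob runs against `/sys/bus/pci/devices`):

```shell
#!/usr/bin/env bash
# Mimic the trace's sysfs glob: each PCI device exposes its netdevs as
# subdirectories of <device>/net/.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/0000:86:00.0/net/cvl_0_0" \
         "$fake_sys/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$fake_sys/$pci/net/"*)     # glob, as in common.sh
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the netdev names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total: ${#net_devs[@]}"
rm -rf "$fake_sys"
```

This reproduces the log's "Found net devices under 0000:86:00.0: cvl_0_0" lines and leaves `net_devs` holding the two interfaces that later become `TCP_INTERFACE_LIST`.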
00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.761 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.761 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:11.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:11.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:20:11.761 00:20:11.761 --- 10.0.0.2 ping statistics --- 00:20:11.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.761 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:20:11.761 00:20:11.761 --- 10.0.0.1 ping statistics --- 00:20:11.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.761 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.761 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2782307 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2782307 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2782307 ']' 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.762 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.762 [2024-11-20 16:13:12.457554] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:20:11.762 [2024-11-20 16:13:12.457600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.762 [2024-11-20 16:13:12.537340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.762 [2024-11-20 16:13:12.581665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.762 [2024-11-20 16:13:12.581701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.762 [2024-11-20 16:13:12.581710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.762 [2024-11-20 16:13:12.581716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.762 [2024-11-20 16:13:12.581721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.762 [2024-11-20 16:13:12.583261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.762 [2024-11-20 16:13:12.583372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.762 [2024-11-20 16:13:12.583479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.762 [2024-11-20 16:13:12.583480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:12.022 16:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 [2024-11-20 16:13:12.786755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 Malloc1 00:20:12.022 16:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.022 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.023 [2024-11-20 16:13:12.851783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.023 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2782499 00:20:12.282 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:12.282 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:14.188 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:14.188 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.188 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.188 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.189 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:14.189 "tick_rate": 2300000000, 00:20:14.189 "poll_groups": [ 00:20:14.189 { 00:20:14.189 "name": "nvmf_tgt_poll_group_000", 00:20:14.189 "admin_qpairs": 1, 00:20:14.189 "io_qpairs": 1, 00:20:14.189 "current_admin_qpairs": 1, 00:20:14.189 "current_io_qpairs": 1, 00:20:14.189 "pending_bdev_io": 0, 00:20:14.189 "completed_nvme_io": 19284, 00:20:14.189 "transports": [ 00:20:14.189 { 00:20:14.189 "trtype": "TCP" 00:20:14.189 } 00:20:14.189 ] 00:20:14.189 }, 00:20:14.189 { 00:20:14.189 "name": "nvmf_tgt_poll_group_001", 00:20:14.189 "admin_qpairs": 0, 00:20:14.189 "io_qpairs": 1, 00:20:14.189 "current_admin_qpairs": 0, 00:20:14.189 "current_io_qpairs": 1, 00:20:14.189 "pending_bdev_io": 0, 00:20:14.189 "completed_nvme_io": 19304, 00:20:14.189 "transports": [ 00:20:14.189 { 00:20:14.189 "trtype": "TCP" 00:20:14.189 } 00:20:14.189 ] 00:20:14.189 }, 00:20:14.189 { 00:20:14.189 "name": "nvmf_tgt_poll_group_002", 00:20:14.189 "admin_qpairs": 0, 00:20:14.189 "io_qpairs": 1, 00:20:14.189 "current_admin_qpairs": 0, 00:20:14.189 "current_io_qpairs": 1, 00:20:14.189 "pending_bdev_io": 0, 00:20:14.189 "completed_nvme_io": 
19270, 00:20:14.189 "transports": [ 00:20:14.189 { 00:20:14.189 "trtype": "TCP" 00:20:14.189 } 00:20:14.189 ] 00:20:14.189 }, 00:20:14.189 { 00:20:14.189 "name": "nvmf_tgt_poll_group_003", 00:20:14.189 "admin_qpairs": 0, 00:20:14.189 "io_qpairs": 1, 00:20:14.189 "current_admin_qpairs": 0, 00:20:14.189 "current_io_qpairs": 1, 00:20:14.189 "pending_bdev_io": 0, 00:20:14.189 "completed_nvme_io": 18815, 00:20:14.189 "transports": [ 00:20:14.189 { 00:20:14.189 "trtype": "TCP" 00:20:14.189 } 00:20:14.189 ] 00:20:14.189 } 00:20:14.189 ] 00:20:14.189 }' 00:20:14.189 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:14.189 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:14.189 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:14.189 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:14.189 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2782499 00:20:22.313 Initializing NVMe Controllers 00:20:22.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:22.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:22.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:22.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:22.313 Initialization complete. Launching workers. 
00:20:22.313 ======================================================== 00:20:22.313 Latency(us) 00:20:22.313 Device Information : IOPS MiB/s Average min max 00:20:22.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9992.50 39.03 6404.79 2586.16 11267.03 00:20:22.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10292.90 40.21 6231.04 2297.54 43861.41 00:20:22.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10199.70 39.84 6274.60 1802.19 10733.18 00:20:22.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10203.90 39.86 6272.49 2264.12 10720.36 00:20:22.313 ======================================================== 00:20:22.313 Total : 40688.99 158.94 6295.02 1802.19 43861.41 00:20:22.313 00:20:22.313 [2024-11-20 16:13:23.048785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cc500 is same with the state(6) to be set 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.313 rmmod nvme_tcp 00:20:22.313 rmmod nvme_fabrics 00:20:22.313 rmmod nvme_keyring 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
# set -e 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2782307 ']' 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2782307 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2782307 ']' 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2782307 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.313 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2782307 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2782307' 00:20:22.572 killing process with pid 2782307 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2782307 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2782307 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 
-- # iptr 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.572 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.107 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:25.107 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:25.107 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:25.107 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:25.673 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:27.575 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.854 16:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.854 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:32.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.855 
16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:32.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:20:32.855 Found net devices under 0000:86:00.0: cvl_0_0 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:32.855 Found net devices under 0000:86:00.1: cvl_0_1 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:32.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:20:32.855 00:20:32.855 --- 10.0.0.2 ping statistics --- 00:20:32.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.855 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:32.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:20:32.855 00:20:32.855 --- 10.0.0.1 ping statistics --- 00:20:32.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.855 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:32.855 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:33.116 net.core.busy_poll = 1 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:33.116 net.core.busy_read = 1 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2786179 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2786179 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2786179 ']' 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.116 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.375 [2024-11-20 16:13:33.980372] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:20:33.375 [2024-11-20 16:13:33.980426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.375 [2024-11-20 16:13:34.062592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.375 [2024-11-20 16:13:34.105806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.375 [2024-11-20 16:13:34.105847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.375 [2024-11-20 16:13:34.105854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.375 [2024-11-20 16:13:34.105860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:33.375 [2024-11-20 16:13:34.105865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.375 [2024-11-20 16:13:34.107481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.375 [2024-11-20 16:13:34.107593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.375 [2024-11-20 16:13:34.107694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.375 [2024-11-20 16:13:34.107694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.375 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.375 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.376 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 [2024-11-20 16:13:34.310939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 Malloc1 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.679 [2024-11-20 16:13:34.378809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2786302 
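An aside on the `-c 0xF0` core mask passed to `spdk_nvme_perf` in this step: a minimal standalone sketch (not part of the test scripts) of how that hex mask expands to the core list that later appears as "from core 4".."from core 7" in the latency tables:

```shell
# Expand a CPU core mask the way SPDK's -c / -m options select cores:
# bit N set means core N is used. 0xF0 -> cores 4-7.
mask=$((0xF0))
cores=""
for core in $(seq 0 31); do
  if (( (mask >> core) & 1 )); then
    cores="$cores $core"
  fi
done
echo "cores:$cores"
```

This is why the target reactors run on cores 0-3 (`-m 0xF`) while the perf initiator's queue pairs land on cores 4-7, keeping the two sides off each other's CPUs.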
00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:33.679 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:35.703 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:35.703 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.703 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.703 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.703 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:35.703 "tick_rate": 2300000000, 00:20:35.703 "poll_groups": [ 00:20:35.703 { 00:20:35.703 "name": "nvmf_tgt_poll_group_000", 00:20:35.703 "admin_qpairs": 1, 00:20:35.703 "io_qpairs": 2, 00:20:35.703 "current_admin_qpairs": 1, 00:20:35.703 "current_io_qpairs": 2, 00:20:35.703 "pending_bdev_io": 0, 00:20:35.703 "completed_nvme_io": 28357, 00:20:35.703 "transports": [ 00:20:35.703 { 00:20:35.703 "trtype": "TCP" 00:20:35.703 } 00:20:35.703 ] 00:20:35.703 }, 00:20:35.703 { 00:20:35.703 "name": "nvmf_tgt_poll_group_001", 00:20:35.703 "admin_qpairs": 0, 00:20:35.703 "io_qpairs": 2, 00:20:35.703 "current_admin_qpairs": 0, 00:20:35.703 "current_io_qpairs": 2, 00:20:35.703 "pending_bdev_io": 0, 00:20:35.703 "completed_nvme_io": 27629, 00:20:35.703 "transports": [ 00:20:35.703 { 00:20:35.703 "trtype": "TCP" 00:20:35.703 } 00:20:35.703 ] 00:20:35.703 }, 00:20:35.703 { 00:20:35.703 "name": "nvmf_tgt_poll_group_002", 00:20:35.703 "admin_qpairs": 0, 00:20:35.703 "io_qpairs": 0, 00:20:35.703 "current_admin_qpairs": 0, 
00:20:35.703 "current_io_qpairs": 0, 00:20:35.703 "pending_bdev_io": 0, 00:20:35.703 "completed_nvme_io": 0, 00:20:35.703 "transports": [ 00:20:35.703 { 00:20:35.703 "trtype": "TCP" 00:20:35.703 } 00:20:35.703 ] 00:20:35.703 }, 00:20:35.703 { 00:20:35.703 "name": "nvmf_tgt_poll_group_003", 00:20:35.703 "admin_qpairs": 0, 00:20:35.703 "io_qpairs": 0, 00:20:35.703 "current_admin_qpairs": 0, 00:20:35.703 "current_io_qpairs": 0, 00:20:35.703 "pending_bdev_io": 0, 00:20:35.703 "completed_nvme_io": 0, 00:20:35.703 "transports": [ 00:20:35.703 { 00:20:35.704 "trtype": "TCP" 00:20:35.704 } 00:20:35.704 ] 00:20:35.704 } 00:20:35.704 ] 00:20:35.704 }' 00:20:35.704 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:35.704 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:35.704 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:35.704 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:35.704 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2786302 00:20:43.896 Initializing NVMe Controllers 00:20:43.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:43.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:43.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:43.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:43.896 Initialization complete. Launching workers. 
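The `nvmf_get_stats` check above (perf_adq.sh@107-109) verifies that ADQ steering left at least two poll groups with no I/O qpairs. A standalone sketch of the same count, using `grep -c` over a snapshot shaped like the stats printed above, rather than the script's live `rpc_cmd | jq` pipeline:

```shell
# Count poll groups with no active I/O qpairs, as perf_adq.sh does with
# jq 'select(.current_io_qpairs == 0)'. The snapshot mirrors the stats
# above: groups 000/001 carry the traffic, 002/003 sit idle under ADQ.
stats='{"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 2}
{"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 2}
{"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0}
{"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0}'
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "idle poll groups: $count"
```

The script then fails the run only if the count is below 2 (`[[ 2 -lt 2 ]]` above is false, so the test proceeds).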
00:20:43.896 ======================================================== 00:20:43.896 Latency(us) 00:20:43.896 Device Information : IOPS MiB/s Average min max 00:20:43.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8024.80 31.35 7999.42 1604.47 52639.69 00:20:43.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7093.80 27.71 9021.53 1532.76 52553.37 00:20:43.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7543.90 29.47 8483.34 683.98 52660.74 00:20:43.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6718.00 26.24 9527.12 1605.60 52719.92 00:20:43.896 ======================================================== 00:20:43.896 Total : 29380.49 114.77 8719.77 683.98 52719.92 00:20:43.896 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.896 rmmod nvme_tcp 00:20:43.896 rmmod nvme_fabrics 00:20:43.896 rmmod nvme_keyring 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:43.896 16:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2786179 ']' 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2786179 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2786179 ']' 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2786179 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2786179 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2786179' 00:20:43.896 killing process with pid 2786179 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2786179 00:20:43.896 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2786179 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:44.155 
16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.155 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:46.694 00:20:46.694 real 0m48.703s 00:20:46.694 user 2m43.995s 00:20:46.694 sys 0m10.377s 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.694 ************************************ 00:20:46.694 END TEST nvmf_perf_adq 00:20:46.694 ************************************ 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.694 16:13:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.694 ************************************ 00:20:46.694 START TEST nvmf_shutdown 00:20:46.694 ************************************ 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:46.694 * Looking for test storage... 00:20:46.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.694 16:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.694 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.695 --rc genhtml_branch_coverage=1 00:20:46.695 --rc genhtml_function_coverage=1 00:20:46.695 --rc genhtml_legend=1 00:20:46.695 --rc geninfo_all_blocks=1 00:20:46.695 --rc geninfo_unexecuted_blocks=1 00:20:46.695 00:20:46.695 ' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.695 --rc genhtml_branch_coverage=1 00:20:46.695 --rc genhtml_function_coverage=1 00:20:46.695 --rc genhtml_legend=1 00:20:46.695 --rc geninfo_all_blocks=1 00:20:46.695 --rc geninfo_unexecuted_blocks=1 00:20:46.695 00:20:46.695 ' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.695 --rc genhtml_branch_coverage=1 00:20:46.695 --rc genhtml_function_coverage=1 00:20:46.695 --rc genhtml_legend=1 00:20:46.695 --rc geninfo_all_blocks=1 00:20:46.695 --rc geninfo_unexecuted_blocks=1 00:20:46.695 00:20:46.695 ' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:46.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.695 --rc genhtml_branch_coverage=1 00:20:46.695 --rc genhtml_function_coverage=1 00:20:46.695 --rc genhtml_legend=1 00:20:46.695 --rc geninfo_all_blocks=1 00:20:46.695 --rc geninfo_unexecuted_blocks=1 00:20:46.695 00:20:46.695 ' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:46.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:46.695 ************************************ 00:20:46.695 START TEST nvmf_shutdown_tc1 00:20:46.695 ************************************ 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.695 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.266 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.266 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:53.266 16:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.266 16:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:53.266 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.266 16:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:53.266 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:53.266 Found net devices under 0000:86:00.0: cvl_0_0 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:53.266 Found net devices under 0000:86:00.1: cvl_0_1 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:53.266 16:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:53.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:20:53.266 00:20:53.266 --- 10.0.0.2 ping statistics --- 00:20:53.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.266 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:20:53.266 00:20:53.266 --- 10.0.0.1 ping statistics --- 00:20:53.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.266 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
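The trace above (nvmf/common.sh@250-291) moves one port of the NIC into a private network namespace so the target (10.0.0.2 on cvl_0_0, inside the namespace) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) can exchange real TCP traffic on a single host, then verifies both directions with ping. A dry-run sketch of that sequence, using the device names and addresses from this log; `run` echoes each command instead of executing it, since the real steps need root and the physical interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in
# nvmf/common.sh. Names and addresses mirror this log; run() only echoes,
# so the sketch is safe to execute without root or the real NIC.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0      # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1   # stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-side interface:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions before starting nvmf_tgt:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Every later `nvmf_tgt` invocation in this log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (NVMF_TARGET_NS_CMD), which is why the target listens on 10.0.0.2 from inside the namespace.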
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2791535 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2791535 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2791535 ']' 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:53.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.266 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.266 [2024-11-20 16:13:53.361134] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:20:53.266 [2024-11-20 16:13:53.361181] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.266 [2024-11-20 16:13:53.441837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.266 [2024-11-20 16:13:53.481984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.267 [2024-11-20 16:13:53.482023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.267 [2024-11-20 16:13:53.482029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.267 [2024-11-20 16:13:53.482035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.267 [2024-11-20 16:13:53.482040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:53.267 [2024-11-20 16:13:53.483620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.267 [2024-11-20 16:13:53.483708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.267 [2024-11-20 16:13:53.483834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.267 [2024-11-20 16:13:53.483835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.525 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.525 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:53.525 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.525 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.525 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.525 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.526 [2024-11-20 16:13:54.248381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.526 16:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.526 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.526 Malloc1 00:20:53.526 [2024-11-20 16:13:54.352769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.784 Malloc2 00:20:53.784 Malloc3 00:20:53.784 Malloc4 00:20:53.784 Malloc5 00:20:53.784 Malloc6 00:20:53.784 Malloc7 00:20:54.044 Malloc8 00:20:54.044 Malloc9 
00:20:54.044 Malloc10 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2791814 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2791814 /var/tmp/bdevperf.sock 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2791814 ']' 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.044 { 00:20:54.044 "params": { 00:20:54.044 "name": "Nvme$subsystem", 00:20:54.044 "trtype": "$TEST_TRANSPORT", 00:20:54.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.044 "adrfam": "ipv4", 00:20:54.044 "trsvcid": "$NVMF_PORT", 00:20:54.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.044 "hdgst": ${hdgst:-false}, 00:20:54.044 "ddgst": ${ddgst:-false} 00:20:54.044 }, 00:20:54.044 "method": "bdev_nvme_attach_controller" 00:20:54.044 } 00:20:54.044 EOF 00:20:54.044 )") 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.044 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.044 { 00:20:54.044 "params": { 00:20:54.044 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": 
${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 
00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 [2024-11-20 16:13:54.835955] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:20:54.045 [2024-11-20 16:13:54.836003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.045 { 00:20:54.045 "params": { 00:20:54.045 "name": "Nvme$subsystem", 00:20:54.045 "trtype": "$TEST_TRANSPORT", 00:20:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.045 "adrfam": "ipv4", 00:20:54.045 "trsvcid": "$NVMF_PORT", 00:20:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.045 "hdgst": ${hdgst:-false}, 
00:20:54.045 "ddgst": ${ddgst:-false} 00:20:54.045 }, 00:20:54.045 "method": "bdev_nvme_attach_controller" 00:20:54.045 } 00:20:54.045 EOF 00:20:54.045 )") 00:20:54.045 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.046 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.046 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.046 { 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme$subsystem", 00:20:54.046 "trtype": "$TEST_TRANSPORT", 00:20:54.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "$NVMF_PORT", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.046 "hdgst": ${hdgst:-false}, 00:20:54.046 "ddgst": ${ddgst:-false} 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 } 00:20:54.046 EOF 00:20:54.046 )") 00:20:54.046 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.046 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:54.046 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:54.046 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme1", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme2", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme3", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme4", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 
00:20:54.046 "name": "Nvme5", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme6", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme7", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme8", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme9", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 },{ 00:20:54.046 "params": { 00:20:54.046 "name": "Nvme10", 00:20:54.046 "trtype": "tcp", 00:20:54.046 "traddr": "10.0.0.2", 00:20:54.046 "adrfam": "ipv4", 00:20:54.046 "trsvcid": "4420", 00:20:54.046 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:54.046 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:54.046 "hdgst": false, 00:20:54.046 "ddgst": false 00:20:54.046 }, 00:20:54.046 "method": "bdev_nvme_attach_controller" 00:20:54.046 }' 00:20:54.305 [2024-11-20 16:13:54.913004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.305 [2024-11-20 16:13:54.954149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2791814 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:56.212 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:57.148 
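The wall of repeated heredoc stanzas above is `gen_nvmf_target_json 1 2 ... 10` emitting one `bdev_nvme_attach_controller` entry per subsystem, which bdevperf then consumes via `--json /dev/fd/63`. A minimal sketch of that accumulate-and-join pattern, with the transport, address, and port hard-coded to the values printed in this log (the real helper in nvmf/common.sh reads them from `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` and also runs the result through `jq .`, as at nvmf/common.sh@584):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced above: one heredoc stanza
# per subsystem number is appended to an array, then the stanzas are
# comma-joined (IFS=,) and printed, as at nvmf/common.sh@585-586.
gen_nvmf_target_json() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}
```

Called as `gen_nvmf_target_json 1 2 3`, the sketch prints three comma-joined stanzas, one per cnode, matching the shape of the `printf '%s\n' '{ "params": ...` output in the trace.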
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2791814 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2791535 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.148 { 00:20:57.148 "params": { 00:20:57.148 "name": "Nvme$subsystem", 00:20:57.148 "trtype": "$TEST_TRANSPORT", 00:20:57.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.148 "adrfam": "ipv4", 00:20:57.148 "trsvcid": "$NVMF_PORT", 00:20:57.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.148 "hdgst": ${hdgst:-false}, 00:20:57.148 "ddgst": ${ddgst:-false} 00:20:57.148 }, 00:20:57.148 "method": "bdev_nvme_attach_controller" 00:20:57.148 } 00:20:57.148 EOF 00:20:57.148 )") 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.148 16:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.148 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.148 { 00:20:57.148 "params": { 00:20:57.148 "name": "Nvme$subsystem", 00:20:57.148 "trtype": "$TEST_TRANSPORT", 00:20:57.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.148 "adrfam": "ipv4", 00:20:57.148 "trsvcid": "$NVMF_PORT", 00:20:57.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.148 "hdgst": ${hdgst:-false}, 00:20:57.148 "ddgst": ${ddgst:-false} 00:20:57.148 }, 00:20:57.148 "method": "bdev_nvme_attach_controller" 00:20:57.148 } 00:20:57.148 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 
16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 [2024-11-20 16:13:57.771390] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:20:57.149 [2024-11-20 16:13:57.771439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792306 ] 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": 
"bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.149 { 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme$subsystem", 00:20:57.149 "trtype": "$TEST_TRANSPORT", 00:20:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "$NVMF_PORT", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.149 "hdgst": ${hdgst:-false}, 00:20:57.149 "ddgst": ${ddgst:-false} 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 } 00:20:57.149 EOF 00:20:57.149 )") 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
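The fragment-generation and merge pattern traced above (nvmf/common.sh's gen_nvmf_target_json: one heredoc JSON fragment per subsystem, comma-joined via `IFS=,` at common.sh@585-586, then pretty-printed with `jq .`) can be sketched roughly as follows. The function name here is an illustrative stand-in, not the exact helper from nvmf/common.sh; the fixed traddr/trsvcid values are the ones this log uses, and the final `jq .` step is omitted so the sketch has no external dependency:

```shell
# Rough stand-in for gen_nvmf_target_json as traced above: build one
# bdev_nvme_attach_controller JSON fragment per subsystem id, then join the
# fragments with commas (local IFS=, plus "${config[*]}") the same way
# common.sh@585-586 does before handing the result to jq.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    local IFS=,                    # first char of IFS is the join separator
    printf '%s\n' "${config[*]}"   # comma-join all fragments and print
}

gen_target_json_sketch 1 2 3
```

bdevperf then receives the generated config through a file descriptor (`--json /dev/fd/62` in the shutdown.sh@92 invocation above), i.e. the JSON never touches disk.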
00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:57.149 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme1", 00:20:57.149 "trtype": "tcp", 00:20:57.149 "traddr": "10.0.0.2", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "4420", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.149 "hdgst": false, 00:20:57.149 "ddgst": false 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 },{ 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme2", 00:20:57.149 "trtype": "tcp", 00:20:57.149 "traddr": "10.0.0.2", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "4420", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:57.149 "hdgst": false, 00:20:57.149 "ddgst": false 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 },{ 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme3", 00:20:57.149 "trtype": "tcp", 00:20:57.149 "traddr": "10.0.0.2", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "4420", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:57.149 "hdgst": false, 00:20:57.149 "ddgst": false 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.149 },{ 00:20:57.149 "params": { 00:20:57.149 "name": "Nvme4", 00:20:57.149 "trtype": "tcp", 00:20:57.149 "traddr": "10.0.0.2", 00:20:57.149 "adrfam": "ipv4", 00:20:57.149 "trsvcid": "4420", 00:20:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:57.149 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:57.149 "hdgst": false, 00:20:57.149 "ddgst": false 00:20:57.149 }, 00:20:57.149 "method": "bdev_nvme_attach_controller" 00:20:57.150 },{ 00:20:57.150 "params": { 
00:20:57.150 "name": "Nvme5", 00:20:57.150 "trtype": "tcp", 00:20:57.150 "traddr": "10.0.0.2", 00:20:57.150 "adrfam": "ipv4", 00:20:57.150 "trsvcid": "4420", 00:20:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:57.150 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:57.150 "hdgst": false, 00:20:57.150 "ddgst": false 00:20:57.150 }, 00:20:57.150 "method": "bdev_nvme_attach_controller" 00:20:57.150 },{ 00:20:57.150 "params": { 00:20:57.150 "name": "Nvme6", 00:20:57.150 "trtype": "tcp", 00:20:57.150 "traddr": "10.0.0.2", 00:20:57.150 "adrfam": "ipv4", 00:20:57.150 "trsvcid": "4420", 00:20:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:57.150 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:57.150 "hdgst": false, 00:20:57.150 "ddgst": false 00:20:57.150 }, 00:20:57.150 "method": "bdev_nvme_attach_controller" 00:20:57.150 },{ 00:20:57.150 "params": { 00:20:57.150 "name": "Nvme7", 00:20:57.150 "trtype": "tcp", 00:20:57.150 "traddr": "10.0.0.2", 00:20:57.150 "adrfam": "ipv4", 00:20:57.150 "trsvcid": "4420", 00:20:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:57.150 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:57.150 "hdgst": false, 00:20:57.150 "ddgst": false 00:20:57.150 }, 00:20:57.150 "method": "bdev_nvme_attach_controller" 00:20:57.150 },{ 00:20:57.150 "params": { 00:20:57.150 "name": "Nvme8", 00:20:57.150 "trtype": "tcp", 00:20:57.150 "traddr": "10.0.0.2", 00:20:57.150 "adrfam": "ipv4", 00:20:57.150 "trsvcid": "4420", 00:20:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:57.150 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:57.150 "hdgst": false, 00:20:57.150 "ddgst": false 00:20:57.150 }, 00:20:57.150 "method": "bdev_nvme_attach_controller" 00:20:57.150 },{ 00:20:57.150 "params": { 00:20:57.150 "name": "Nvme9", 00:20:57.150 "trtype": "tcp", 00:20:57.150 "traddr": "10.0.0.2", 00:20:57.150 "adrfam": "ipv4", 00:20:57.150 "trsvcid": "4420", 00:20:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:57.150 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:57.150 "hdgst": false, 00:20:57.150 "ddgst": false 00:20:57.150 }, 00:20:57.150 "method": "bdev_nvme_attach_controller" 00:20:57.150 },{ 00:20:57.150 "params": { 00:20:57.150 "name": "Nvme10", 00:20:57.150 "trtype": "tcp", 00:20:57.150 "traddr": "10.0.0.2", 00:20:57.150 "adrfam": "ipv4", 00:20:57.150 "trsvcid": "4420", 00:20:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:57.150 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:57.150 "hdgst": false, 00:20:57.150 "ddgst": false 00:20:57.150 }, 00:20:57.150 "method": "bdev_nvme_attach_controller" 00:20:57.150 }' 00:20:57.150 [2024-11-20 16:13:57.850052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.150 [2024-11-20 16:13:57.891368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.531 Running I/O for 1 seconds... 00:20:59.727 2202.00 IOPS, 137.62 MiB/s 00:20:59.727 Latency(us) 00:20:59.727 [2024-11-20T15:14:00.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme1n1 : 1.14 227.06 14.19 0.00 0.00 278737.83 2949.12 238892.97 00:20:59.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme2n1 : 1.15 277.35 17.33 0.00 0.00 225529.86 20287.67 224304.08 00:20:59.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme3n1 : 1.15 279.44 17.46 0.00 0.00 220633.62 13278.16 222480.47 00:20:59.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme4n1 : 1.12 316.58 19.79 0.00 0.00 186166.31 12252.38 221568.67 00:20:59.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme5n1 : 1.11 230.64 14.41 0.00 0.00 258143.28 17096.35 237069.36 00:20:59.727 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme6n1 : 1.16 275.42 17.21 0.00 0.00 213590.91 9175.04 231598.53 00:20:59.727 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme7n1 : 1.15 277.17 17.32 0.00 0.00 208978.99 18464.06 214274.23 00:20:59.727 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme8n1 : 1.15 277.87 17.37 0.00 0.00 206042.33 17438.27 211538.81 00:20:59.727 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme9n1 : 1.16 280.17 17.51 0.00 0.00 201227.54 1909.09 217009.64 00:20:59.727 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.727 Verification LBA range: start 0x0 length 0x400 00:20:59.727 Nvme10n1 : 1.17 274.24 17.14 0.00 0.00 202722.70 11568.53 231598.53 00:20:59.727 [2024-11-20T15:14:00.564Z] =================================================================================================================== 00:20:59.727 [2024-11-20T15:14:00.564Z] Total : 2715.95 169.75 0.00 0.00 217858.31 1909.09 238892.97 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.986 rmmod nvme_tcp 00:20:59.986 rmmod nvme_fabrics 00:20:59.986 rmmod nvme_keyring 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2791535 ']' 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2791535 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2791535 ']' 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2791535 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2791535 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:59.986 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:59.987 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2791535' 00:20:59.987 killing process with pid 2791535 00:20:59.987 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2791535 00:20:59.987 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2791535 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.555 16:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.555 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.462 00:21:02.462 real 0m15.878s 00:21:02.462 user 0m36.226s 00:21:02.462 sys 0m6.001s 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.462 ************************************ 00:21:02.462 END TEST nvmf_shutdown_tc1 00:21:02.462 ************************************ 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:02.462 ************************************ 00:21:02.462 
START TEST nvmf_shutdown_tc2 00:21:02.462 ************************************ 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.462 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.462 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.463 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.463 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:02.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:02.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:02.463 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.463 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:02.463 Found net devices under 0000:86:00.0: cvl_0_0 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:02.463 Found net devices under 0000:86:00.1: cvl_0_1 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.463 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.463 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:02.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:21:02.723 00:21:02.723 --- 10.0.0.2 ping statistics --- 00:21:02.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.723 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:02.723 00:21:02.723 --- 10.0.0.1 ping statistics --- 00:21:02.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.723 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.723 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2793330 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2793330 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2793330 ']' 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.723 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:02.724 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.724 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.982 [2024-11-20 16:14:03.597397] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:21:02.982 [2024-11-20 16:14:03.597444] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.982 [2024-11-20 16:14:03.677133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.982 [2024-11-20 16:14:03.719306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.982 [2024-11-20 16:14:03.719343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.982 [2024-11-20 16:14:03.719353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.982 [2024-11-20 16:14:03.719359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.982 [2024-11-20 16:14:03.719364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.982 [2024-11-20 16:14:03.721008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.982 [2024-11-20 16:14:03.721117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.982 [2024-11-20 16:14:03.721225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.982 [2024-11-20 16:14:03.721226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.920 [2024-11-20 16:14:04.473001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.920 16:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.920 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.920 Malloc1 00:21:03.920 [2024-11-20 16:14:04.581565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.920 Malloc2 00:21:03.920 Malloc3 00:21:03.920 Malloc4 00:21:03.920 Malloc5 00:21:04.179 Malloc6 00:21:04.179 Malloc7 00:21:04.179 Malloc8 00:21:04.179 Malloc9 
00:21:04.179 Malloc10 00:21:04.179 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.179 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:04.179 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.179 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:04.179 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2793603 00:21:04.179 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2793603 /var/tmp/bdevperf.sock 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2793603 ']' 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:04.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.439 { 00:21:04.439 "params": { 00:21:04.439 "name": "Nvme$subsystem", 00:21:04.439 "trtype": "$TEST_TRANSPORT", 00:21:04.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.439 "adrfam": "ipv4", 00:21:04.439 "trsvcid": "$NVMF_PORT", 00:21:04.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.439 "hdgst": ${hdgst:-false}, 00:21:04.439 "ddgst": ${ddgst:-false} 00:21:04.439 }, 00:21:04.439 "method": "bdev_nvme_attach_controller" 00:21:04.439 } 00:21:04.439 EOF 00:21:04.439 )") 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.439 { 00:21:04.439 "params": { 00:21:04.439 "name": "Nvme$subsystem", 00:21:04.439 "trtype": "$TEST_TRANSPORT", 00:21:04.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.439 
"adrfam": "ipv4", 00:21:04.439 "trsvcid": "$NVMF_PORT", 00:21:04.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.439 "hdgst": ${hdgst:-false}, 00:21:04.439 "ddgst": ${ddgst:-false} 00:21:04.439 }, 00:21:04.439 "method": "bdev_nvme_attach_controller" 00:21:04.439 } 00:21:04.439 EOF 00:21:04.439 )") 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.439 { 00:21:04.439 "params": { 00:21:04.439 "name": "Nvme$subsystem", 00:21:04.439 "trtype": "$TEST_TRANSPORT", 00:21:04.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.439 "adrfam": "ipv4", 00:21:04.439 "trsvcid": "$NVMF_PORT", 00:21:04.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.439 "hdgst": ${hdgst:-false}, 00:21:04.439 "ddgst": ${ddgst:-false} 00:21:04.439 }, 00:21:04.439 "method": "bdev_nvme_attach_controller" 00:21:04.439 } 00:21:04.439 EOF 00:21:04.439 )") 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.439 { 00:21:04.439 "params": { 00:21:04.439 "name": "Nvme$subsystem", 00:21:04.439 "trtype": "$TEST_TRANSPORT", 00:21:04.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.439 "adrfam": "ipv4", 00:21:04.439 "trsvcid": "$NVMF_PORT", 00:21:04.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:04.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.439 "hdgst": ${hdgst:-false}, 00:21:04.439 "ddgst": ${ddgst:-false} 00:21:04.439 }, 00:21:04.439 "method": "bdev_nvme_attach_controller" 00:21:04.439 } 00:21:04.439 EOF 00:21:04.439 )") 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.439 { 00:21:04.439 "params": { 00:21:04.439 "name": "Nvme$subsystem", 00:21:04.439 "trtype": "$TEST_TRANSPORT", 00:21:04.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.439 "adrfam": "ipv4", 00:21:04.439 "trsvcid": "$NVMF_PORT", 00:21:04.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.439 "hdgst": ${hdgst:-false}, 00:21:04.439 "ddgst": ${ddgst:-false} 00:21:04.439 }, 00:21:04.439 "method": "bdev_nvme_attach_controller" 00:21:04.439 } 00:21:04.439 EOF 00:21:04.439 )") 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.439 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.439 { 00:21:04.439 "params": { 00:21:04.439 "name": "Nvme$subsystem", 00:21:04.439 "trtype": "$TEST_TRANSPORT", 00:21:04.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "$NVMF_PORT", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.440 "hdgst": ${hdgst:-false}, 00:21:04.440 "ddgst": 
${ddgst:-false} 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 } 00:21:04.440 EOF 00:21:04.440 )") 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.440 { 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme$subsystem", 00:21:04.440 "trtype": "$TEST_TRANSPORT", 00:21:04.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "$NVMF_PORT", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.440 "hdgst": ${hdgst:-false}, 00:21:04.440 "ddgst": ${ddgst:-false} 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 } 00:21:04.440 EOF 00:21:04.440 )") 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.440 [2024-11-20 16:14:05.059070] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:21:04.440 [2024-11-20 16:14:05.059115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793603 ] 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.440 { 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme$subsystem", 00:21:04.440 "trtype": "$TEST_TRANSPORT", 00:21:04.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "$NVMF_PORT", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.440 "hdgst": ${hdgst:-false}, 00:21:04.440 "ddgst": ${ddgst:-false} 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 } 00:21:04.440 EOF 00:21:04.440 )") 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.440 { 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme$subsystem", 00:21:04.440 "trtype": "$TEST_TRANSPORT", 00:21:04.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "$NVMF_PORT", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.440 "hdgst": ${hdgst:-false}, 00:21:04.440 "ddgst": ${ddgst:-false} 00:21:04.440 }, 00:21:04.440 "method": 
"bdev_nvme_attach_controller" 00:21:04.440 } 00:21:04.440 EOF 00:21:04.440 )") 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.440 { 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme$subsystem", 00:21:04.440 "trtype": "$TEST_TRANSPORT", 00:21:04.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "$NVMF_PORT", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.440 "hdgst": ${hdgst:-false}, 00:21:04.440 "ddgst": ${ddgst:-false} 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 } 00:21:04.440 EOF 00:21:04.440 )") 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:04.440 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme1", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme2", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme3", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme4", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 
00:21:04.440 "name": "Nvme5", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme6", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme7", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme8", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme9", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 },{ 00:21:04.440 "params": { 00:21:04.440 "name": "Nvme10", 00:21:04.440 "trtype": "tcp", 00:21:04.440 "traddr": "10.0.0.2", 00:21:04.440 "adrfam": "ipv4", 00:21:04.440 "trsvcid": "4420", 00:21:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:04.440 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:04.440 "hdgst": false, 00:21:04.440 "ddgst": false 00:21:04.440 }, 00:21:04.440 "method": "bdev_nvme_attach_controller" 00:21:04.440 }' 00:21:04.440 [2024-11-20 16:14:05.135908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.440 [2024-11-20 16:14:05.177710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.819 Running I/O for 10 seconds... 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:06.388 16:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.388 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.388 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:06.388 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:06.388 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:06.647 16:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2793603 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2793603 ']' 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2793603 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:06.647 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.647 16:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793603 00:21:06.648 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.648 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.648 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793603' 00:21:06.648 killing process with pid 2793603 00:21:06.648 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2793603 00:21:06.648 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2793603 00:21:06.648 Received shutdown signal, test time was about 0.791939 seconds 00:21:06.648 00:21:06.648 Latency(us) 00:21:06.648 [2024-11-20T15:14:07.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.648 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme1n1 : 0.76 252.74 15.80 0.00 0.00 249903.79 25872.47 203332.56 00:21:06.648 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme2n1 : 0.79 323.53 20.22 0.00 0.00 190518.65 14303.94 209715.20 00:21:06.648 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme3n1 : 0.79 324.47 20.28 0.00 0.00 186269.38 20857.54 220656.86 00:21:06.648 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme4n1 : 0.79 331.76 20.74 0.00 0.00 178076.37 
8263.23 219745.06 00:21:06.648 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme5n1 : 0.77 247.81 15.49 0.00 0.00 233772.97 18236.10 232510.33 00:21:06.648 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme6n1 : 0.77 248.63 15.54 0.00 0.00 227520.63 19603.81 206067.98 00:21:06.648 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme7n1 : 0.76 253.53 15.85 0.00 0.00 217019.14 25644.52 217009.64 00:21:06.648 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme8n1 : 0.77 250.95 15.68 0.00 0.00 214636.34 15614.66 218833.25 00:21:06.648 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme9n1 : 0.78 245.11 15.32 0.00 0.00 215370.05 22111.28 227951.30 00:21:06.648 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.648 Verification LBA range: start 0x0 length 0x400 00:21:06.648 Nvme10n1 : 0.78 246.05 15.38 0.00 0.00 209215.96 17780.20 242540.19 00:21:06.648 [2024-11-20T15:14:07.485Z] =================================================================================================================== 00:21:06.648 [2024-11-20T15:14:07.485Z] Total : 2724.59 170.29 0.00 0.00 209675.92 8263.23 242540.19 00:21:06.906 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2793330 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.846 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.846 rmmod nvme_tcp 00:21:07.846 rmmod nvme_fabrics 00:21:07.846 rmmod nvme_keyring 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2793330 ']' 
00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2793330 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2793330 ']' 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2793330 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793330 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793330' 00:21:08.106 killing process with pid 2793330 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2793330 00:21:08.106 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2793330 00:21:08.365 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.365 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.365 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.365 16:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:08.365 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.366 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.912 00:21:10.912 real 0m7.941s 00:21:10.912 user 0m24.050s 00:21:10.912 sys 0m1.384s 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.912 ************************************ 00:21:10.912 END TEST nvmf_shutdown_tc2 00:21:10.912 ************************************ 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:10.912 16:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:10.912 ************************************ 00:21:10.912 START TEST nvmf_shutdown_tc3 00:21:10.912 ************************************ 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.912 16:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.912 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.913 16:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.913 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.913 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.913 
16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.913 16:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:10.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:21:10.913 00:21:10.913 --- 10.0.0.2 ping statistics --- 00:21:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.913 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:10.913 00:21:10.913 --- 10.0.0.1 ping statistics --- 00:21:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.913 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2794862 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2794862 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:10.913 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2794862 ']' 00:21:10.914 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.914 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.914 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:10.914 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.914 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.914 [2024-11-20 16:14:11.610857] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:21:10.914 [2024-11-20 16:14:11.610902] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.914 [2024-11-20 16:14:11.689487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.914 [2024-11-20 16:14:11.731989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.914 [2024-11-20 16:14:11.732027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.914 [2024-11-20 16:14:11.732034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.914 [2024-11-20 16:14:11.732040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.914 [2024-11-20 16:14:11.732046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.914 [2024-11-20 16:14:11.733657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.914 [2024-11-20 16:14:11.733762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.914 [2024-11-20 16:14:11.733870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.914 [2024-11-20 16:14:11.733871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.173 [2024-11-20 16:14:11.872570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.173 16:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:11.173 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:11.174 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:11.174 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.174 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.174 Malloc1 00:21:11.174 [2024-11-20 16:14:11.995453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.433 Malloc2 00:21:11.433 Malloc3 00:21:11.433 Malloc4 00:21:11.433 Malloc5 00:21:11.433 Malloc6 00:21:11.433 Malloc7 00:21:11.694 Malloc8 00:21:11.694 Malloc9 
00:21:11.694 Malloc10 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2795019 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2795019 /var/tmp/bdevperf.sock 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2795019 ']' 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:11.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.694 { 00:21:11.694 "params": { 00:21:11.694 "name": "Nvme$subsystem", 00:21:11.694 "trtype": "$TEST_TRANSPORT", 00:21:11.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.694 "adrfam": "ipv4", 00:21:11.694 "trsvcid": "$NVMF_PORT", 00:21:11.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.694 "hdgst": ${hdgst:-false}, 00:21:11.694 "ddgst": ${ddgst:-false} 00:21:11.694 }, 00:21:11.694 "method": "bdev_nvme_attach_controller" 00:21:11.694 } 00:21:11.694 EOF 00:21:11.694 )") 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.694 { 00:21:11.694 "params": { 00:21:11.694 "name": "Nvme$subsystem", 00:21:11.694 "trtype": "$TEST_TRANSPORT", 00:21:11.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.694 
"adrfam": "ipv4", 00:21:11.694 "trsvcid": "$NVMF_PORT", 00:21:11.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.694 "hdgst": ${hdgst:-false}, 00:21:11.694 "ddgst": ${ddgst:-false} 00:21:11.694 }, 00:21:11.694 "method": "bdev_nvme_attach_controller" 00:21:11.694 } 00:21:11.694 EOF 00:21:11.694 )") 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.694 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.694 { 00:21:11.694 "params": { 00:21:11.694 "name": "Nvme$subsystem", 00:21:11.694 "trtype": "$TEST_TRANSPORT", 00:21:11.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.694 "adrfam": "ipv4", 00:21:11.694 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": 
${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 [2024-11-20 16:14:12.472026] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:21:11.695 [2024-11-20 16:14:12.472074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795019 ] 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": 
${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.695 { 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme$subsystem", 00:21:11.695 "trtype": "$TEST_TRANSPORT", 00:21:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "$NVMF_PORT", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.695 "hdgst": ${hdgst:-false}, 00:21:11.695 "ddgst": ${ddgst:-false} 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 } 00:21:11.695 EOF 00:21:11.695 )") 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:11.695 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme1", 00:21:11.695 "trtype": "tcp", 00:21:11.695 "traddr": "10.0.0.2", 00:21:11.695 "adrfam": "ipv4", 00:21:11.695 "trsvcid": "4420", 00:21:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.695 "hdgst": false, 00:21:11.695 "ddgst": false 00:21:11.695 }, 00:21:11.695 "method": "bdev_nvme_attach_controller" 00:21:11.695 },{ 00:21:11.695 "params": { 00:21:11.695 "name": "Nvme2", 00:21:11.695 "trtype": "tcp", 00:21:11.695 "traddr": "10.0.0.2", 00:21:11.695 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme3", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme4", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 
00:21:11.696 "name": "Nvme5", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme6", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme7", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme8", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme9", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 },{ 00:21:11.696 "params": { 00:21:11.696 "name": "Nvme10", 00:21:11.696 "trtype": "tcp", 00:21:11.696 "traddr": "10.0.0.2", 00:21:11.696 "adrfam": "ipv4", 00:21:11.696 "trsvcid": "4420", 00:21:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:11.696 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:11.696 "hdgst": false, 00:21:11.696 "ddgst": false 00:21:11.696 }, 00:21:11.696 "method": "bdev_nvme_attach_controller" 00:21:11.696 }' 00:21:11.956 [2024-11-20 16:14:12.550180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.956 [2024-11-20 16:14:12.591439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.335 Running I/O for 10 seconds... 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=16 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 16 -ge 100 ']' 00:21:13.594 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:13.853 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:21:13.853 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:13.853 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:13.854 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.854 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:13.854 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=137 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 137 -ge 100 ']' 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2794862 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2794862 ']' 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2794862 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:14.132 16:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2794862 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2794862' 00:21:14.132 killing process with pid 2794862 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2794862 00:21:14.132 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2794862
00:21:14.132 [2024-11-20 16:14:14.760453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f9850 is same with the state(6) to be set
00:21:14.133 [... identical message for tqpair=0x5f9850 repeated, last occurrence 2024-11-20 16:14:14.760923 ...]
00:21:14.133 [2024-11-20 16:14:14.762069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fc400 is same with the state(6) to be set
00:21:14.134 [... identical message for tqpair=0x5fc400 repeated, last occurrence 2024-11-20 16:14:14.762517 ...]
00:21:14.134 [2024-11-20 16:14:14.763628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f9d20 is same with the state(6) to be set
00:21:14.134 [... identical message for tqpair=0x5f9d20 repeated, last occurrence 2024-11-20 16:14:14.764064 ...]
00:21:14.134 [2024-11-20 16:14:14.765283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set
00:21:14.135 [... identical message for tqpair=0x5fa1f0 repeated through 2024-11-20 16:14:14.765641 ...] [2024-11-20 16:14:14.765649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 
is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.765732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa1f0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 
00:21:14.135 [2024-11-20 16:14:14.766897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.135 [2024-11-20 16:14:14.766984] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.766991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.766997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 
is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 00:21:14.136 [2024-11-20 16:14:14.767228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set 
00:21:14.136 [2024-11-20 16:14:14.767234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fa6e0 is same with the state(6) to be set [last message repeated through 16:14:14.767247] 00:21:14.136 [2024-11-20 16:14:14.767443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.136 [2024-11-20 16:14:14.767473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [ASYNC EVENT REQUEST/ABORTED pair repeated for cid:1-3] 00:21:14.136 [2024-11-20 16:14:14.767527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9861e0 is same with the state(6) to be set [same ASYNC EVENT REQUEST/ABORTED sequence repeated for tqpair=0xdd5830, 0x985fe0, 0x9912c0, 0x9921b0] 00:21:14.137 [2024-11-20 16:14:14.768681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fabb0 is same with the state(6) to be set [last message repeated through 16:14:14.768757] 00:21:14.137 [2024-11-20 16:14:14.768765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x5fabb0 is same with the state(6) to be set [last message repeated through 16:14:14.769097] 00:21:14.137 [2024-11-20 16:14:14.769103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x5fabb0 is same with the state(6) to be set 00:21:14.137 [2024-11-20 16:14:14.769110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fabb0 is same with the state(6) to be set 00:21:14.137 [2024-11-20 16:14:14.769117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fabb0 is same with the state(6) to be set 00:21:14.137 [2024-11-20 16:14:14.769124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fabb0 is same with the state(6) to be set 00:21:14.137 [2024-11-20 16:14:14.770331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb0a0 is same with the state(6) to be set 00:21:14.137 [2024-11-20 16:14:14.770588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.137 [2024-11-20 16:14:14.770612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.137 [2024-11-20 16:14:14.770628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 
[2024-11-20 16:14:14.770667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.770990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.770996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.771004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 
16:14:14.771021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.771036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.771050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.771065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.771080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138 [2024-11-20 16:14:14.771095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138 [2024-11-20 16:14:14.771102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138
[2024-11-20 16:14:14.771112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138
[2024-11-20 16:14:14.771113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138
[2024-11-20 16:14:14.771129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138
[2024-11-20 16:14:14.771137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138
[2024-11-20 16:14:14.771146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138
[2024-11-20 16:14:14.771155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138
[2024-11-20 16:14:14.771166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138
[2024-11-20 16:14:14.771174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.138
[2024-11-20 16:14:14.771185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.138
[2024-11-20 16:14:14.771186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.138
[2024-11-20 16:14:14.771192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771262] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771311] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139 [2024-11-20 16:14:14.771451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139 [2024-11-20 16:14:14.771461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139 [2024-11-20 16:14:14.771464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.139
[2024-11-20 16:14:14.771525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.139
[2024-11-20 16:14:14.771534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.139
[2024-11-20 16:14:14.771536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fb570 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.771631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.771670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.140
[2024-11-20 16:14:14.771676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.140
[2024-11-20 16:14:14.772594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140
[2024-11-20 16:14:14.772614]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 
is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 
00:21:14.140 [2024-11-20 16:14:14.772853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772926] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.140 [2024-11-20 16:14:14.772960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.772966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.772973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.772981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.772987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.772994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.773000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.773006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x5fba40 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.773656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fbf10 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.773677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fbf10 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.773684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fbf10 is same with the state(6) to be set 00:21:14.141 [2024-11-20 16:14:14.779514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 
[2024-11-20 16:14:14.779600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 
16:14:14.779956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.141 [2024-11-20 16:14:14.779980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.141 [2024-11-20 16:14:14.779987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.779995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 
[2024-11-20 16:14:14.780302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.142 [2024-11-20 16:14:14.780527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780632] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:14.142 [2024-11-20 16:14:14.780658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:14.142 [2024-11-20 16:14:14.780706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a6610 (9): Bad file descriptor 00:21:14.142 [2024-11-20 16:14:14.780730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9861e0 (9): Bad file descriptor 00:21:14.142 [2024-11-20 16:14:14.780746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xdd5830 (9): Bad file descriptor 00:21:14.142 [2024-11-20 16:14:14.780763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x985fe0 (9): Bad file descriptor 00:21:14.142 [2024-11-20 16:14:14.780791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.142 [2024-11-20 16:14:14.780801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.142 [2024-11-20 16:14:14.780809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf03c0 is same with the state(6) to be set 00:21:14.143 [2024-11-20 16:14:14.780881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb2f20 is same with the state(6) to be set 00:21:14.143 [2024-11-20 16:14:14.780977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.780987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.780994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef140 is same with the state(6) to be set 00:21:14.143 [2024-11-20 16:14:14.781046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9912c0 (9): Bad file descriptor 00:21:14.143 [2024-11-20 16:14:14.781059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9921b0 (9): Bad file descriptor 00:21:14.143 [2024-11-20 16:14:14.781083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.143 [2024-11-20 16:14:14.781137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.781145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbd030 is same with the state(6) to be set 00:21:14.143 [2024-11-20 16:14:14.781201] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:14.143 [2024-11-20 16:14:14.781253] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:14.143 [2024-11-20 16:14:14.781340] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:14.143 [2024-11-20 16:14:14.781389] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:14.143 [2024-11-20 16:14:14.782476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:14.143 [2024-11-20 16:14:14.782500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef140 (9): Bad file descriptor 00:21:14.143 [2024-11-20 16:14:14.782548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 
16:14:14.782755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782841] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.143 [2024-11-20 16:14:14.782856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.143 [2024-11-20 16:14:14.782862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.782871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.782878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.782886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.782892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.782905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.782912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.782922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.782929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.782937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.782944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.782958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.782966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 
[2024-11-20 16:14:14.791204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791570] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.144 [2024-11-20 16:14:14.791673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.144 [2024-11-20 16:14:14.791681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.145 [2024-11-20 16:14:14.791691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.145 [2024-11-20 16:14:14.791700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.145 [2024-11-20 16:14:14.791708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.145 [2024-11-20 16:14:14.791716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.145 [2024-11-20 16:14:14.791723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.145 [2024-11-20 16:14:14.791732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.145 [2024-11-20 16:14:14.791739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.145 [2024-11-20 16:14:14.791748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.145 [2024-11-20 16:14:14.791757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.145 [2024-11-20 16:14:14.791764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd949b0 is same with the state(6) to be set 00:21:14.145 [2024-11-20 16:14:14.792507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.145 [2024-11-20 16:14:14.792535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a6610 with addr=10.0.0.2, port=4420 00:21:14.145 [2024-11-20 16:14:14.792544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6610 is same with the state(6) to be set 00:21:14.145 [2024-11-20 16:14:14.792605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf03c0 (9): Bad file descriptor 00:21:14.145 [2024-11-20 16:14:14.792627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb2f20 (9): Bad file descriptor 00:21:14.145 [2024-11-20 16:14:14.792646] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:21:14.145 [2024-11-20 16:14:14.792663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbd030 (9): Bad file descriptor 00:21:14.145 [2024-11-20 16:14:14.793655] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:14.145 [2024-11-20 16:14:14.794032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:14.145 [2024-11-20 16:14:14.794148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.145 [2024-11-20 16:14:14.794164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef140 with addr=10.0.0.2, port=4420 00:21:14.145 [2024-11-20 16:14:14.794173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef140 is same with the state(6) to be set 00:21:14.145 [2024-11-20 16:14:14.794183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a6610 (9): Bad file descriptor 00:21:14.145 [2024-11-20 16:14:14.794195] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:21:14.145 [2024-11-20 16:14:14.794240 – 16:14:14.795285] [repeated entries condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs: READ sqid:1 cid:11–63 nsid:1 (lba:17792–24448, len:128) interleaved with WRITE sqid:1 cid:0–10 nsid:1 (lba:24576–25856, len:128), SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.147 [2024-11-20 16:14:14.795294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb96390 is same with the state(6) to be set
00:21:14.147 [2024-11-20 16:14:14.796307 – 16:14:14.796988] [repeated entries condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs: READ sqid:1 cid:0–36 nsid:1 (lba:16384–20992, len:128), SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.148 [2024-11-20 16:14:14.797001] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 
[2024-11-20 16:14:14.797236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.797550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.797560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb97640 is same with the state(6) to be set 00:21:14.148 [2024-11-20 16:14:14.798884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.798903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:14.148 [2024-11-20 16:14:14.798918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.798928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.798941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.798957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.798969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.798979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.798990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.148 [2024-11-20 16:14:14.799132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.148 [2024-11-20 16:14:14.799145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:14.149 [2024-11-20 16:14:14.799293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 
16:14:14.799763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799877] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.149 [2024-11-20 16:14:14.799940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.149 [2024-11-20 16:14:14.799953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.150 [2024-11-20 16:14:14.799965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.150 [2024-11-20 16:14:14.799974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.150 [2024-11-20 16:14:14.799985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.150 [2024-11-20 16:14:14.799994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.800247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.800257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd95e30 is same with the state(6) to be set
00:21:14.150 [2024-11-20 16:14:14.801594] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:14.150 [2024-11-20 16:14:14.801658] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:14.150 [2024-11-20 16:14:14.801716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.801987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.801999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.802009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.802020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.802030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.802042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.802052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.802063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.802073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.802085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.802094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.150 [2024-11-20 16:14:14.802106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.150 [2024-11-20 16:14:14.802115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.151 [2024-11-20 16:14:14.802931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.151 [2024-11-20 16:14:14.802943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.802956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.802969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.802978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.802997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.803006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.803018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.803027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.803039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.803050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.803062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.803071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.803083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.803092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.803102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a3c0 is same with the state(6) to be set
00:21:14.152 [2024-11-20 16:14:14.804365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:14.152 [2024-11-20 16:14:14.804388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:14.152 [2024-11-20 16:14:14.804401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:14.152 [2024-11-20 16:14:14.804695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.152 [2024-11-20 16:14:14.804715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9912c0 with addr=10.0.0.2, port=4420
00:21:14.152 [2024-11-20 16:14:14.804726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9912c0 is same with the state(6) to be set
00:21:14.152 [2024-11-20 16:14:14.804740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef140 (9): Bad file descriptor
00:21:14.152 [2024-11-20 16:14:14.804752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:14.152 [2024-11-20 16:14:14.804761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:14.152 [2024-11-20 16:14:14.804772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:14.152 [2024-11-20 16:14:14.804783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:14.152 [2024-11-20 16:14:14.804809] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:14.152 [2024-11-20 16:14:14.804846] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:21:14.152 [2024-11-20 16:14:14.804866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9912c0 (9): Bad file descriptor
00:21:14.152 [2024-11-20 16:14:14.805260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:14.152 [2024-11-20 16:14:14.805448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.152 [2024-11-20 16:14:14.805467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9921b0 with addr=10.0.0.2, port=4420
00:21:14.152 [2024-11-20 16:14:14.805478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9921b0 is same with the state(6) to be set
00:21:14.152 [2024-11-20 16:14:14.805630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.152 [2024-11-20 16:14:14.805645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9861e0 with addr=10.0.0.2, port=4420
00:21:14.152 [2024-11-20 16:14:14.805655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9861e0 is same with the state(6) to be set
00:21:14.152 [2024-11-20 16:14:14.805880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.152 [2024-11-20 16:14:14.805895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x985fe0 with addr=10.0.0.2, port=4420
00:21:14.152 [2024-11-20 16:14:14.805906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x985fe0 is same with the state(6) to be set
00:21:14.152 [2024-11-20 16:14:14.805917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:14.152 [2024-11-20 16:14:14.805926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:14.152 [2024-11-20 16:14:14.805936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:14.152 [2024-11-20 16:14:14.805946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:14.152 [2024-11-20 16:14:14.806822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.806988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.806998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.152 [2024-11-20 16:14:14.807155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.152 [2024-11-20 16:14:14.807167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153 [2024-11-20 16:14:14.807287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.153 [2024-11-20 16:14:14.807296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.153
[2024-11-20 16:14:14.807303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807671] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.153 [2024-11-20 16:14:14.807793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.153 [2024-11-20 16:14:14.807801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.807809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.807817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.807825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.807833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.807840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.807849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.807856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.807865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.807872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.807880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.807888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.807895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97370 is same with the state(6) to be set 00:21:14.154 [2024-11-20 16:14:14.808905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.808920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.808931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.808939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.808954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 
16:14:14.808962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.808972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.808980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.808989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.808998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809060] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 
[2024-11-20 16:14:14.809248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.154 [2024-11-20 16:14:14.809429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.154 [2024-11-20 16:14:14.809438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809610] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.155 [2024-11-20 16:14:14.809821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.155 [2024-11-20 16:14:14.809828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 
16:14:14.809884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.809959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.809967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd99df0 is same with the state(6) to be set 00:21:14.156 [2024-11-20 16:14:14.810972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.810988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.810999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 
16:14:14.811177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.156 [2024-11-20 16:14:14.811267] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.156 [2024-11-20 16:14:14.811274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 
[2024-11-20 16:14:14.811467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.157 [2024-11-20 16:14:14.811718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.157 [2024-11-20 16:14:14.811725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811828] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811915] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.811986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.811994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.158 [2024-11-20 16:14:14.812002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.158 [2024-11-20 16:14:14.812011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.158 [2024-11-20 16:14:14.812018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.158 [2024-11-20 16:14:14.812027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.158 [2024-11-20 16:14:14.812034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.158 [2024-11-20 16:14:14.812043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0f50 is same with the state(6) to be set
00:21:14.158 [2024-11-20 16:14:14.813238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:14.158 [2024-11-20 16:14:14.813258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:14.158 [2024-11-20 16:14:14.813269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:14.158 task offset: 21888 on job bdev=Nvme6n1 fails
00:21:14.158
00:21:14.158 Latency(us)
00:21:14.158 [2024-11-20T15:14:14.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.158 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.158 Job: Nvme1n1 ended in about 0.71 seconds with error
00:21:14.158 Verification LBA range: start 0x0 length 0x400
00:21:14.158 Nvme1n1 : 0.71 196.71 12.29 90.57 0.00 219592.06 10485.76 216097.84
00:21:14.158 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.158 Job: Nvme2n1 ended in about 0.71 seconds with error
00:21:14.158 Verification LBA range: start 0x0 length 0x400
00:21:14.158 Nvme2n1 : 0.71 180.55 11.28 90.27 0.00 226902.37 16754.42 201508.95
00:21:14.158 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.158 Job: Nvme3n1 ended in about 0.70 seconds with error
00:21:14.158 Verification LBA range: start 0x0 length 0x400
00:21:14.158 Nvme3n1 : 0.70 211.64 13.23 90.91 0.00 197666.29 14531.90 221568.67
00:21:14.158 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.158 Job: Nvme4n1 ended in about 0.71 seconds with error
00:21:14.158 Verification LBA range: start 0x0 length 0x400
00:21:14.158 Nvme4n1 : 0.71 185.48 11.59 89.93 0.00 211386.16 14588.88 219745.06
00:21:14.159 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.159 Job: Nvme5n1 ended in about 0.72 seconds with error
00:21:14.159 Verification LBA range: start 0x0 length 0x400
00:21:14.159 Nvme5n1 : 0.72 177.98 11.12 88.99 0.00 212317.64 21769.35 220656.86
00:21:14.159 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.159 Job: Nvme6n1 ended in about 0.69 seconds with error
00:21:14.159 Verification LBA range: start 0x0 length 0x400
00:21:14.159 Nvme6n1 : 0.69 185.56 11.60 92.78 0.00 196515.84 9289.02 224304.08
00:21:14.159 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.159 Job: Nvme7n1 ended in about 0.72 seconds with error
00:21:14.159 Verification LBA range: start 0x0 length 0x400
00:21:14.159 Nvme7n1 : 0.72 177.47 11.09 88.73 0.00 201033.76 17438.27 220656.86
00:21:14.159 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.159 Job: Nvme8n1 ended in about 0.72 seconds with error
00:21:14.159 Verification LBA range: start 0x0 length 0x400
00:21:14.159 Nvme8n1 : 0.72 176.96 11.06 88.48 0.00 195802.75 18122.13 208803.39
00:21:14.159 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.159 Job: Nvme9n1 ended in about 0.69 seconds with error
00:21:14.159 Verification LBA range: start 0x0 length 0x400
00:21:14.159 Nvme9n1 : 0.69 184.74 11.55 92.37 0.00 179801.93 9232.03 224304.08
00:21:14.159 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:14.159 Job: Nvme10n1 ended in about 0.71 seconds with error
00:21:14.159 Verification LBA range: start 0x0 length 0x400
00:21:14.159 Nvme10n1 : 0.71 89.57 5.60 89.57 0.00 271775.83 19831.76 253481.85
00:21:14.159 [2024-11-20T15:14:14.996Z] ===================================================================================================================
00:21:14.159 [2024-11-20T15:14:14.996Z] Total : 1766.66 110.42 902.61 0.00 209130.53 9232.03 253481.85
00:21:14.159 [2024-11-20 16:14:14.846665] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:14.159 [2024-11-20 16:14:14.846719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:14.159 [2024-11-20 16:14:14.847064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.159 [2024-11-20 16:14:14.847086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd5830 with addr=10.0.0.2, port=4420
00:21:14.159 [2024-11-20 16:14:14.847098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd5830 is same with the state(6) to be set
00:21:14.159 [2024-11-20 16:14:14.847121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9921b0 (9): Bad file descriptor
00:21:14.159 [2024-11-20 16:14:14.847135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9861e0 (9): Bad file descriptor
00:21:14.159 [2024-11-20 16:14:14.847146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x985fe0 (9): Bad file descriptor
00:21:14.159 [2024-11-20 16:14:14.847155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:14.159 [2024-11-20 16:14:14.847163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:14.159 [2024-11-20 16:14:14.847173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:14.159 [2024-11-20 16:14:14.847183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:14.159 [2024-11-20 16:14:14.847507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.159 [2024-11-20 16:14:14.847525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a6610 with addr=10.0.0.2, port=4420
00:21:14.159 [2024-11-20 16:14:14.847534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6610 is same with the state(6) to be set
00:21:14.159 [2024-11-20 16:14:14.847670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.159 [2024-11-20 16:14:14.847681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbd030 with addr=10.0.0.2, port=4420
00:21:14.159 [2024-11-20 16:14:14.847688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbd030 is same with the state(6) to be set
00:21:14.159 [2024-11-20 16:14:14.847832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.159 [2024-11-20 16:14:14.847843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb2f20 with addr=10.0.0.2, port=4420
00:21:14.159 [2024-11-20 16:14:14.847851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb2f20 is same with the state(6) to be set
00:21:14.159 [2024-11-20 16:14:14.848043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:14.159 [2024-11-20 16:14:14.848056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf03c0 with addr=10.0.0.2, port=4420
00:21:14.159 [2024-11-20 16:14:14.848064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf03c0 is same with the state(6) to be set
00:21:14.159 [2024-11-20 16:14:14.848073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd5830 (9): Bad file descriptor
00:21:14.159 [2024-11-20 16:14:14.848082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:14.159 [2024-11-20 16:14:14.848090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:14.159 [2024-11-20 16:14:14.848097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:14.159 [2024-11-20 16:14:14.848105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:14.159 [2024-11-20 16:14:14.848113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:14.159 [2024-11-20 16:14:14.848120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:14.159 [2024-11-20 16:14:14.848126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:14.159 [2024-11-20 16:14:14.848132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:14.159 [2024-11-20 16:14:14.848140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:14.159 [2024-11-20 16:14:14.848146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:14.159 [2024-11-20 16:14:14.848157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:14.159 [2024-11-20 16:14:14.848163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:14.159 [2024-11-20 16:14:14.848205] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:14.159 [2024-11-20 16:14:14.848979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a6610 (9): Bad file descriptor 00:21:14.159 [2024-11-20 16:14:14.848998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbd030 (9): Bad file descriptor 00:21:14.159 [2024-11-20 16:14:14.849008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb2f20 (9): Bad file descriptor 00:21:14.159 [2024-11-20 16:14:14.849017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf03c0 (9): Bad file descriptor 00:21:14.159 [2024-11-20 16:14:14.849026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:14.159 [2024-11-20 16:14:14.849033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.849040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:21:14.160 [2024-11-20 16:14:14.849047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:14.160 [2024-11-20 16:14:14.849093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:14.160 [2024-11-20 16:14:14.849106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:14.160 [2024-11-20 16:14:14.849114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:14.160 [2024-11-20 16:14:14.849123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:14.160 [2024-11-20 16:14:14.849131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:14.160 [2024-11-20 16:14:14.849164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.849173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.849179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.849186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:14.160 [2024-11-20 16:14:14.849194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.849200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.849206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:21:14.160 [2024-11-20 16:14:14.849213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:14.160 [2024-11-20 16:14:14.849220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.849227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.849233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.849240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:14.160 [2024-11-20 16:14:14.849250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.849256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.849263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.849270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:21:14.160 [2024-11-20 16:14:14.849556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.160 [2024-11-20 16:14:14.849571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef140 with addr=10.0.0.2, port=4420 00:21:14.160 [2024-11-20 16:14:14.849580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef140 is same with the state(6) to be set 00:21:14.160 [2024-11-20 16:14:14.849742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.160 [2024-11-20 16:14:14.849754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9912c0 with addr=10.0.0.2, port=4420 00:21:14.160 [2024-11-20 16:14:14.849763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9912c0 is same with the state(6) to be set 00:21:14.160 [2024-11-20 16:14:14.849853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.160 [2024-11-20 16:14:14.849865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x985fe0 with addr=10.0.0.2, port=4420 00:21:14.160 [2024-11-20 16:14:14.849873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x985fe0 is same with the state(6) to be set 00:21:14.160 [2024-11-20 16:14:14.849931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.160 [2024-11-20 16:14:14.849941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9861e0 with addr=10.0.0.2, port=4420 00:21:14.160 [2024-11-20 16:14:14.849953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9861e0 is same with the state(6) to be set 00:21:14.160 [2024-11-20 16:14:14.850006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.160 [2024-11-20 16:14:14.850016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x9921b0 with addr=10.0.0.2, port=4420 00:21:14.160 [2024-11-20 16:14:14.850024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9921b0 is same with the state(6) to be set 00:21:14.160 [2024-11-20 16:14:14.850054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef140 (9): Bad file descriptor 00:21:14.160 [2024-11-20 16:14:14.850065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9912c0 (9): Bad file descriptor 00:21:14.160 [2024-11-20 16:14:14.850074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x985fe0 (9): Bad file descriptor 00:21:14.160 [2024-11-20 16:14:14.850084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9861e0 (9): Bad file descriptor 00:21:14.160 [2024-11-20 16:14:14.850093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9921b0 (9): Bad file descriptor 00:21:14.160 [2024-11-20 16:14:14.850116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.850125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.850133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.850141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:14.160 [2024-11-20 16:14:14.850149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.850156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.850166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.850173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:14.160 [2024-11-20 16:14:14.850179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.850185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.850192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.850199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:14.160 [2024-11-20 16:14:14.850206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.850213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:14.160 [2024-11-20 16:14:14.850220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:14.160 [2024-11-20 16:14:14.850226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:14.160 [2024-11-20 16:14:14.850232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:14.160 [2024-11-20 16:14:14.850239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:14.161 [2024-11-20 16:14:14.850246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:14.161 [2024-11-20 16:14:14.850252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:14.420 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:15.357 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2795019 00:21:15.357 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:15.357 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2795019 00:21:15.357 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:15.357 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.357 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2795019 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:15.358 16:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:21:15.358 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.358 rmmod nvme_tcp 00:21:15.617 rmmod nvme_fabrics 00:21:15.617 rmmod nvme_keyring 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2794862 ']' 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2794862 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2794862 ']' 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2794862 00:21:15.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2794862) - No such process 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2794862 is not found' 00:21:15.617 Process with pid 2794862 is not found 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:15.617 
16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.617 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.523 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:17.523 00:21:17.523 real 0m7.061s 00:21:17.523 user 0m16.028s 00:21:17.523 sys 0m1.287s 00:21:17.523 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.523 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.523 ************************************ 00:21:17.523 END TEST nvmf_shutdown_tc3 00:21:17.523 ************************************ 00:21:17.523 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:17.783 16:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:17.783 ************************************ 00:21:17.783 START TEST nvmf_shutdown_tc4 00:21:17.783 ************************************ 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.783 16:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:17.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.783 
16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:17.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:17.783 Found net devices under 0000:86:00.0: cvl_0_0 00:21:17.783 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:17.784 Found net devices under 0000:86:00.1: cvl_0_1 
00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:17.784 16:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:17.784 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:21:18.043 00:21:18.043 --- 10.0.0.2 ping statistics --- 00:21:18.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.043 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:18.043 00:21:18.043 --- 10.0.0.1 ping statistics --- 00:21:18.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.043 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2796182 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2796182 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2796182 ']' 00:21:18.043 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.044 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.044 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:18.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.044 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.044 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.044 [2024-11-20 16:14:18.754636] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:21:18.044 [2024-11-20 16:14:18.754679] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.044 [2024-11-20 16:14:18.834124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.044 [2024-11-20 16:14:18.876874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.044 [2024-11-20 16:14:18.876911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.044 [2024-11-20 16:14:18.876919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.044 [2024-11-20 16:14:18.876926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.044 [2024-11-20 16:14:18.876931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.303 [2024-11-20 16:14:18.878377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.303 [2024-11-20 16:14:18.878490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.303 [2024-11-20 16:14:18.878596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.303 [2024-11-20 16:14:18.878596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:18.303 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.303 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:18.303 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.303 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.303 16:14:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.303 [2024-11-20 16:14:19.020360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.303 16:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.303 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.303 Malloc1 00:21:18.303 [2024-11-20 16:14:19.124650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.564 Malloc2 00:21:18.564 Malloc3 00:21:18.564 Malloc4 00:21:18.564 Malloc5 00:21:18.564 Malloc6 00:21:18.564 Malloc7 00:21:18.824 Malloc8 00:21:18.824 Malloc9 
00:21:18.824 Malloc10 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2796329 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:18.824 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:18.824 [2024-11-20 16:14:19.637721] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2796182 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2796182 ']' 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2796182 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796182 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796182' 00:21:24.101 killing process with pid 2796182 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2796182 00:21:24.101 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2796182 00:21:24.101 [2024-11-20 16:14:24.636317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9daa0 is same with the state(6) to be set 00:21:24.101 [2024-11-20 
16:14:24.636831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636942] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.636984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9cc10 is same with the state(6) to be set 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 starting I/O failed: -6 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 starting I/O failed: -6 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 starting I/O failed: -6 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 starting I/O failed: -6 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, 
sc=8) 00:21:24.101 starting I/O failed: -6 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 starting I/O failed: -6 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 [2024-11-20 16:14:24.640101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 [2024-11-20 16:14:24.640125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.101 starting I/O failed: -6 00:21:24.101 [2024-11-20 16:14:24.640133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.101 [2024-11-20 16:14:24.640140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 [2024-11-20 16:14:24.640146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.101 Write completed with error (sct=0, sc=8) 00:21:24.101 [2024-11-20 16:14:24.640157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.640163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.640170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b460 is same with the
state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.640410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 
00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.641202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a5f0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.641231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a5f0 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.641239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a5f0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.641246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a5f0 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6
00:21:24.102 [2024-11-20 16:14:24.641255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a5f0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.641486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.641689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 
16:14:24.641714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.641722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.641731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.641739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.641745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.641752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.641759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.641766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8aac0 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 
starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 
[2024-11-20 16:14:24.642119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8af90 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 
starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 [2024-11-20 16:14:24.642481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 
is same with the state(6) to be set 00:21:24.102 starting I/O failed: -6 00:21:24.102 [2024-11-20 16:14:24.642487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:24.102 [2024-11-20 16:14:24.642517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 [2024-11-20 16:14:24.642531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a120 is same with the state(6) to be set 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with 
error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed 
with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write 
completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.102 Write completed with error (sct=0, sc=8) 00:21:24.102 starting I/O failed: -6 00:21:24.103 [2024-11-20 16:14:24.644087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.103 NVMe io qpair process completion error 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting 
I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 [2024-11-20 16:14:24.645150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.103 [2024-11-20 16:14:24.645196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 [2024-11-20 16:14:24.645216] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 [2024-11-20 16:14:24.645223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 [2024-11-20 16:14:24.645230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 [2024-11-20 16:14:24.645236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 [2024-11-20 16:14:24.645242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 [2024-11-20 16:14:24.645249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c7a0 is same with the state(6) to be set 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, 
sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 
Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 [2024-11-20 16:14:24.646232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:24.103 starting I/O failed: -6 00:21:24.103 starting I/O failed: -6 00:21:24.103 starting I/O failed: -6 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 
00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 Write completed with 
error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 [2024-11-20 16:14:24.647403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 
00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: 
-6 00:21:24.103 Write completed with error (sct=0, sc=8) 00:21:24.103 starting I/O failed: -6 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages elided ...]
00:21:24.104 [2024-11-20 16:14:24.649216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.104 NVMe io qpair process completion error
00:21:24.104 [... repeated write-completion error messages elided ...]
00:21:24.104 [2024-11-20 16:14:24.650107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.104 [... repeated write-completion error messages elided ...]
00:21:24.104 [2024-11-20 16:14:24.651033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.104 [... repeated write-completion error messages elided ...]
00:21:24.104 [2024-11-20 16:14:24.652041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.105 [... repeated write-completion error messages elided ...]
00:21:24.105 [2024-11-20 16:14:24.653557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.105 NVMe io qpair process completion error
00:21:24.105 [... repeated write-completion error messages elided ...]
00:21:24.105 [2024-11-20 16:14:24.655078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.105 [... repeated write-completion error messages elided ...]
00:21:24.105 [2024-11-20 16:14:24.655885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.106 [... repeated write-completion error messages elided ...]
00:21:24.106 [2024-11-20 16:14:24.656913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.106 [... repeated write-completion error messages elided ...]
00:21:24.106 [2024-11-20 16:14:24.659153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.106 NVMe io qpair process completion error
00:21:24.106 [... repeated write-completion error messages elided ...]
00:21:24.106 [2024-11-20 16:14:24.660128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.106 [... repeated write-completion error messages elided ...]
00:21:24.107 [2024-11-20 16:14:24.661029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.107 Write completed with error (sct=0, sc=8)
00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with 
error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 [2024-11-20 16:14:24.662026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 
starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 
00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, 
sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 [2024-11-20 16:14:24.666660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.107 NVMe io qpair process completion error 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, 
sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 Write completed with error (sct=0, sc=8) 00:21:24.107 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 [2024-11-20 16:14:24.667647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such 
device or address) on qpair id 1 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write 
completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 [2024-11-20 16:14:24.668554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 
00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with 
error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 [2024-11-20 16:14:24.669568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 
Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 
00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.108 Write completed with error (sct=0, sc=8) 00:21:24.108 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: 
-6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 [2024-11-20 16:14:24.673260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.109 NVMe io qpair process completion error 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 
00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 [2024-11-20 
16:14:24.674271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed with error (sct=0, sc=8) 00:21:24.109 Write completed 
with error (sct=0, sc=8) 00:21:24.109 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each in-flight I/O ...]
00:21:24.109 [2024-11-20 16:14:24.675192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.109 [2024-11-20 16:14:24.676243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.110 [2024-11-20 16:14:24.677918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.110 NVMe io qpair process completion error
00:21:24.111 [2024-11-20 16:14:24.682548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.111 NVMe io qpair process completion error
00:21:24.111 [2024-11-20 16:14:24.683714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.111 [2024-11-20 16:14:24.684614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.112 [2024-11-20 16:14:24.685663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6
00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: 
-6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 [2024-11-20 16:14:24.692048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.112 NVMe io qpair process completion error 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write 
completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 Write completed with error (sct=0, sc=8) 00:21:24.112 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 [2024-11-20 16:14:24.693074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O 
failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, 
sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 [2024-11-20 16:14:24.693984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.113 starting I/O failed: -6 00:21:24.113 starting I/O failed: -6 00:21:24.113 starting I/O failed: -6 00:21:24.113 starting I/O failed: -6 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 
00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with 
error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 [2024-11-20 16:14:24.695201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, 
sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error 
(sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 Write completed with error (sct=0, sc=8) 00:21:24.113 starting I/O failed: -6 00:21:24.113 [2024-11-20 
16:14:24.697608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.114 NVMe io qpair process completion error
00:21:24.114 Initializing NVMe Controllers
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:24.114 Controller IO queue size 128, less than required.
00:21:24.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:24.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:24.114 Initialization complete. Launching workers.
00:21:24.114 ========================================================
00:21:24.114 Latency(us)
00:21:24.114 Device Information : IOPS MiB/s Average min max
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2118.06 91.01 60435.67 940.78 109301.55
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2144.59 92.15 59704.72 688.00 109063.18
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2154.81 92.59 59438.94 922.98 105623.31
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2120.45 91.11 60453.86 708.83 109769.48
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2161.55 92.88 59341.32 759.38 113991.02
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2208.53 94.90 58085.00 737.95 116383.43
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2207.88 94.87 58122.67 936.32 118830.90
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2188.30 94.03 58715.37 797.71 125538.90
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2169.82 93.23 58494.41 1058.15 99974.43
00:21:24.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2085.22 89.60 60883.07 792.33 100210.99
00:21:24.114 ========================================================
00:21:24.114 Total : 21559.21 926.37 59351.01 688.00 125538.90
00:21:24.114
00:21:24.114 [2024-11-20 16:14:24.700636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd2720 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0ef0 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0890 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd1740 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd1410 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd2900 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd1a70 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0560 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd2ae0 is same with the state(6) to be set
00:21:24.114 [2024-11-20 16:14:24.700913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0bc0 is same with the state(6) to be set
00:21:24.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:24.373 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2796329
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2796329
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2796329
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.310 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.311 rmmod nvme_tcp 00:21:25.311 rmmod nvme_fabrics 00:21:25.311 rmmod nvme_keyring 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2796182 ']' 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2796182 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2796182 ']' 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2796182 00:21:25.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2796182) - No such process 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2796182 is not found' 00:21:25.311 Process with pid 2796182 is not found 
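The `killprocess` trace above probes the target pid with `kill -0` before killing it, and the log shows the "No such process" branch being taken. A minimal sketch of that pattern (the function name `killprocess_sketch` is a hypothetical stand-in, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the pid-liveness check seen in the log: signal 0 performs an
# existence/permission test without delivering any signal.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1            # no pid recorded, nothing to do
    if kill -0 "$pid" 2>/dev/null; then  # process exists and is signalable
        kill "$pid"                      # terminate it
    else
        echo "Process with pid $pid is not found"
    fi
}

# Demo against a pid that is guaranteed dead: spawn and reap a child first.
sleep 0 & pid=$!
wait "$pid"
killprocess_sketch "$pid"
```

The `2>/dev/null` matters: in the log, the raw `kill -0 2796182` line is what produced the "line 958: kill: ... No such process" noise before the script printed its own message.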
00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.311 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.992 00:21:27.992 real 0m9.779s 00:21:27.992 user 0m25.055s 00:21:27.992 sys 0m5.106s 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.992 16:14:28 
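The `iptr` step above tears down firewall rules with `iptables-save | grep -v SPDK_NVMF | iptables-restore`: every rule the harness adds carries an iptables comment containing the `SPDK_NVMF` marker, so teardown can drop exactly those rules by filtering the saved ruleset. A sketch of the filtering half, run against a sample ruleset string rather than the live firewall:

```shell
#!/usr/bin/env bash
# Drop every saved-rule line carrying the SPDK_NVMF marker comment.
remove_tagged_rules() {
    grep -v 'SPDK_NVMF'
}

# Sample ruleset: one untagged rule, one rule tagged the way ipts() tags them.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."'

remove_tagged_rules <<< "$ruleset"    # only the untagged rule survives
```

In the live path the filtered output is piped straight into `iptables-restore`, which atomically replaces the ruleset, so unrelated rules are preserved untouched.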
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:27.992 ************************************ 00:21:27.992 END TEST nvmf_shutdown_tc4 00:21:27.992 ************************************ 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:27.992 00:21:27.992 real 0m41.181s 00:21:27.992 user 1m41.603s 00:21:27.992 sys 0m14.090s 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:27.992 ************************************ 00:21:27.992 END TEST nvmf_shutdown 00:21:27.992 ************************************ 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.992 ************************************ 00:21:27.992 START TEST nvmf_nsid 00:21:27.992 ************************************ 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:27.992 * Looking for test storage... 
00:21:27.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.992 
16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.992 --rc genhtml_branch_coverage=1 00:21:27.992 --rc genhtml_function_coverage=1 00:21:27.992 --rc genhtml_legend=1 00:21:27.992 --rc geninfo_all_blocks=1 00:21:27.992 --rc 
geninfo_unexecuted_blocks=1 00:21:27.992 00:21:27.992 ' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.992 --rc genhtml_branch_coverage=1 00:21:27.992 --rc genhtml_function_coverage=1 00:21:27.992 --rc genhtml_legend=1 00:21:27.992 --rc geninfo_all_blocks=1 00:21:27.992 --rc geninfo_unexecuted_blocks=1 00:21:27.992 00:21:27.992 ' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.992 --rc genhtml_branch_coverage=1 00:21:27.992 --rc genhtml_function_coverage=1 00:21:27.992 --rc genhtml_legend=1 00:21:27.992 --rc geninfo_all_blocks=1 00:21:27.992 --rc geninfo_unexecuted_blocks=1 00:21:27.992 00:21:27.992 ' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:27.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.992 --rc genhtml_branch_coverage=1 00:21:27.992 --rc genhtml_function_coverage=1 00:21:27.992 --rc genhtml_legend=1 00:21:27.992 --rc geninfo_all_blocks=1 00:21:27.992 --rc geninfo_unexecuted_blocks=1 00:21:27.992 00:21:27.992 ' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
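The `cmp_versions` trace above (`lt 1.15 2` for the lcov version gate) splits both version strings on `IFS=.-:` and compares them numerically field by field, padding the shorter one with zeros. A simplified sketch of that comparison (`ver_lt` is a stand-in for the fuller `cmp_versions`, which also handles `>`, `>=`, and `<=`):

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted/dashed version strings.
ver_lt() {
    local IFS=.-:                # split fields on '.', '-', or ':'
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        ((x < y)) && return 0             # first differing field decides
        ((x > y)) && return 1
    done
    return 1                     # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is why the log takes the old-lcov branch and exports the `--rc lcov_branch_coverage=1` style options: field 0 compares 1 < 2 and the loop decides immediately.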
00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.992 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.993 16:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
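An aside on the PATH lines above: `paths/export.sh` prepends the same tool directories on every `source`, which is why the final `echo` shows `/opt/golangci/1.54.2/bin`, `/opt/protoc/21.7/bin`, and `/opt/go/1.21.1/bin` repeated many times. This is harmless (lookup stops at the first hit) but noisy; a sketch of an idempotent alternative that only prepends when the directory is absent:

```shell
#!/usr/bin/env bash
# Prepend $1 to PATH only if it is not already a PATH component.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already present: leave PATH untouched
        *) PATH="$1:$PATH" ;;
    esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
```

Wrapping the component in `:...:` on both sides avoids false matches on prefixes (e.g. `/opt/go` vs `/opt/go/1.21.1/bin`).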
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.993 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.330 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.330 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.331 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.331 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.331 16:14:34 
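The "Found net devices under 0000:86:00.x" lines above come from a sysfs lookup: for each NIC PCI address, the kernel exposes that device's network interface names as directories under `/sys/bus/pci/devices/<addr>/net/`. A sketch of that discovery, run against a mock sysfs tree so it works anywhere without hardware (the real script globs the live `/sys`):

```shell
#!/usr/bin/env bash
# Build a throwaway tree mimicking /sys/bus/pci/devices/<addr>/net/<ifname>.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0"

# Print the interface names belonging to one PCI address.
list_net_devs() {
    local pci=$1
    local -a devs=("$sysfs/$pci/net/"*)   # glob interface directories
    devs=("${devs[@]##*/}")               # strip path, keep e.g. cvl_0_0
    printf '%s\n' "${devs[@]}"
}

out=$(list_net_devs 0000:86:00.0)
echo "Found net devices under 0000:86:00.0: $out"
rm -rf "$sysfs"
```

The `"${devs[@]##*/}"` expansion is the same basename trick the trace shows at `nvmf/common.sh@427` (`pci_net_devs=("${pci_net_devs[@]##*/}")`).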
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.331 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:33.590 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:33.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:21:33.590 00:21:33.590 --- 10.0.0.2 ping statistics --- 00:21:33.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.590 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:33.590 00:21:33.590 --- 10.0.0.1 ping statistics --- 00:21:33.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.590 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.590 16:14:34 
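The `nvmf_tcp_init` sequence above wires the target NIC port into a private network namespace and then verifies connectivity with a ping in each direction. A recap as an annotated command sequence; since it needs root and the physical `cvl_0_0`/`cvl_0_1` interfaces from this log, `RUN=echo` keeps it a dry run (set `RUN=` to execute for real):

```shell
#!/usr/bin/env bash
RUN=${RUN:-echo}   # dry-run by default: print commands instead of running them

setup_target_ns() {
    $RUN ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    $RUN ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, default ns
    $RUN ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    $RUN ping -c 1 10.0.0.2                                # initiator -> target
    $RUN ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
}

setup_target_ns
```

After this, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array), including the `nvmf_tgt` launch itself.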
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2800921 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2800921 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2800921 ']' 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.590 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.591 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.591 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:33.850 [2024-11-20 16:14:34.466730] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:21:33.850 [2024-11-20 16:14:34.466775] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.850 [2024-11-20 16:14:34.547815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.850 [2024-11-20 16:14:34.588542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.850 [2024-11-20 16:14:34.588578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.850 [2024-11-20 16:14:34.588585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.850 [2024-11-20 16:14:34.588592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.850 [2024-11-20 16:14:34.588597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.850 [2024-11-20 16:14:34.589153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.850 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.850 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:33.850 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.850 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.850 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2800944 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.109 
16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8324640f-ce95-456d-ab64-bc3a09cd4ab1 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c7a7f469-7274-4126-83d2-7c957c201b39 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=940dcdfd-d5d9-4cd3-9546-8a4dbfab6f2b 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.109 null0 00:21:34.109 null1 00:21:34.109 [2024-11-20 16:14:34.768658] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:21:34.109 [2024-11-20 16:14:34.768703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800944 ] 00:21:34.109 null2 00:21:34.109 [2024-11-20 16:14:34.772955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.109 [2024-11-20 16:14:34.797154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2800944 /var/tmp/tgt2.sock 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2800944 ']' 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:34.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.109 16:14:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.109 [2024-11-20 16:14:34.843261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.109 [2024-11-20 16:14:34.889798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.369 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.369 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:34.369 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:34.629 [2024-11-20 16:14:35.420475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.629 [2024-11-20 16:14:35.436585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:34.889 nvme0n1 nvme0n2 00:21:34.889 nvme1n1 00:21:34.889 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:34.889 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:34.889 16:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:35.826 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:35.827 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:35.827 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:35.827 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:35.827 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:35.827 16:14:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8324640f-ce95-456d-ab64-bc3a09cd4ab1 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:36.764 16:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:36.764 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8324640fce95456dab64bc3a09cd4ab1 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8324640FCE95456DAB64BC3A09CD4AB1 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8324640FCE95456DAB64BC3A09CD4AB1 == \8\3\2\4\6\4\0\F\C\E\9\5\4\5\6\D\A\B\6\4\B\C\3\A\0\9\C\D\4\A\B\1 ]] 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c7a7f469-7274-4126-83d2-7c957c201b39 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:37.023 
16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c7a7f4697274412683d27c957c201b39 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C7A7F4697274412683D27C957C201B39 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C7A7F4697274412683D27C957C201B39 == \C\7\A\7\F\4\6\9\7\2\7\4\4\1\2\6\8\3\D\2\7\C\9\5\7\C\2\0\1\B\3\9 ]] 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 940dcdfd-d5d9-4cd3-9546-8a4dbfab6f2b 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:37.023 16:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=940dcdfdd5d94cd395468a4dbfab6f2b 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 940DCDFDD5D94CD395468A4DBFAB6F2B 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 940DCDFDD5D94CD395468A4DBFAB6F2B == \9\4\0\D\C\D\F\D\D\5\D\9\4\C\D\3\9\5\4\6\8\A\4\D\B\F\A\B\6\F\2\B ]] 00:21:37.023 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2800944 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2800944 ']' 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2800944 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.283 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800944 00:21:37.283 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.283 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.283 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800944' 00:21:37.283 killing process with pid 2800944 00:21:37.283 16:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2800944 00:21:37.283 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2800944 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.542 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.542 rmmod nvme_tcp 00:21:37.542 rmmod nvme_fabrics 00:21:37.542 rmmod nvme_keyring 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2800921 ']' 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2800921 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2800921 ']' 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2800921 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.802 16:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800921 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800921' 00:21:37.802 killing process with pid 2800921 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2800921 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2800921 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.802 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.802 16:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.339 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.339 00:21:40.339 real 0m12.376s 00:21:40.339 user 0m9.638s 00:21:40.339 sys 0m5.521s 00:21:40.339 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.339 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.339 ************************************ 00:21:40.339 END TEST nvmf_nsid 00:21:40.339 ************************************ 00:21:40.339 16:14:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:40.339 00:21:40.339 real 12m7.021s 00:21:40.339 user 26m16.968s 00:21:40.339 sys 3m41.816s 00:21:40.339 16:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.339 16:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.339 ************************************ 00:21:40.339 END TEST nvmf_target_extra 00:21:40.339 ************************************ 00:21:40.339 16:14:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:40.339 16:14:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.339 16:14:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.339 16:14:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.339 ************************************ 00:21:40.339 START TEST nvmf_host 00:21:40.339 ************************************ 00:21:40.339 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:40.339 * Looking for test storage... 
00:21:40.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:40.339 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.339 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.339 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.340 --rc genhtml_branch_coverage=1 00:21:40.340 --rc genhtml_function_coverage=1 00:21:40.340 --rc genhtml_legend=1 00:21:40.340 --rc geninfo_all_blocks=1 00:21:40.340 --rc geninfo_unexecuted_blocks=1 00:21:40.340 00:21:40.340 ' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.340 --rc genhtml_branch_coverage=1 00:21:40.340 --rc genhtml_function_coverage=1 00:21:40.340 --rc genhtml_legend=1 00:21:40.340 --rc 
geninfo_all_blocks=1 00:21:40.340 --rc geninfo_unexecuted_blocks=1 00:21:40.340 00:21:40.340 ' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.340 --rc genhtml_branch_coverage=1 00:21:40.340 --rc genhtml_function_coverage=1 00:21:40.340 --rc genhtml_legend=1 00:21:40.340 --rc geninfo_all_blocks=1 00:21:40.340 --rc geninfo_unexecuted_blocks=1 00:21:40.340 00:21:40.340 ' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.340 --rc genhtml_branch_coverage=1 00:21:40.340 --rc genhtml_function_coverage=1 00:21:40.340 --rc genhtml_legend=1 00:21:40.340 --rc geninfo_all_blocks=1 00:21:40.340 --rc geninfo_unexecuted_blocks=1 00:21:40.340 00:21:40.340 ' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.340 16:14:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.340 ************************************ 00:21:40.340 START TEST nvmf_multicontroller 00:21:40.340 ************************************ 00:21:40.340 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:40.340 * Looking for test storage... 
00:21:40.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:40.340 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.340 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.340 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.600 --rc genhtml_branch_coverage=1 00:21:40.600 --rc genhtml_function_coverage=1 
00:21:40.600 --rc genhtml_legend=1 00:21:40.600 --rc geninfo_all_blocks=1 00:21:40.600 --rc geninfo_unexecuted_blocks=1 00:21:40.600 00:21:40.600 ' 00:21:40.600 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.601 --rc genhtml_branch_coverage=1 00:21:40.601 --rc genhtml_function_coverage=1 00:21:40.601 --rc genhtml_legend=1 00:21:40.601 --rc geninfo_all_blocks=1 00:21:40.601 --rc geninfo_unexecuted_blocks=1 00:21:40.601 00:21:40.601 ' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.601 --rc genhtml_branch_coverage=1 00:21:40.601 --rc genhtml_function_coverage=1 00:21:40.601 --rc genhtml_legend=1 00:21:40.601 --rc geninfo_all_blocks=1 00:21:40.601 --rc geninfo_unexecuted_blocks=1 00:21:40.601 00:21:40.601 ' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.601 --rc genhtml_branch_coverage=1 00:21:40.601 --rc genhtml_function_coverage=1 00:21:40.601 --rc genhtml_legend=1 00:21:40.601 --rc geninfo_all_blocks=1 00:21:40.601 --rc geninfo_unexecuted_blocks=1 00:21:40.601 00:21:40.601 ' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.601 16:14:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.601 16:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.170 16:14:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.170 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.170 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.170 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.171 16:14:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:21:47.171 00:21:47.171 --- 10.0.0.2 ping statistics --- 00:21:47.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.171 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:21:47.171 00:21:47.171 --- 10.0.0.1 ping statistics --- 00:21:47.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.171 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2805232 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2805232 00:21:47.171 16:14:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2805232 ']' 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 [2024-11-20 16:14:47.197103] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:21:47.171 [2024-11-20 16:14:47.197151] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.171 [2024-11-20 16:14:47.278262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.171 [2024-11-20 16:14:47.318185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.171 [2024-11-20 16:14:47.318222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:47.171 [2024-11-20 16:14:47.318228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.171 [2024-11-20 16:14:47.318234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.171 [2024-11-20 16:14:47.318239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.171 [2024-11-20 16:14:47.319707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.171 [2024-11-20 16:14:47.319813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.171 [2024-11-20 16:14:47.319814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 [2024-11-20 16:14:47.469788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 Malloc0 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 [2024-11-20 
16:14:47.536058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 [2024-11-20 16:14:47.543953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 Malloc1 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.171 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2805278 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2805278 /var/tmp/bdevperf.sock 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2805278 ']' 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 NVMe0n1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.172 1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:47.172 16:14:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 request: 00:21:47.172 { 00:21:47.172 "name": "NVMe0", 00:21:47.172 "trtype": "tcp", 00:21:47.172 "traddr": "10.0.0.2", 00:21:47.172 "adrfam": "ipv4", 00:21:47.172 "trsvcid": "4420", 00:21:47.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.172 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:47.172 "hostaddr": "10.0.0.1", 00:21:47.172 "prchk_reftag": false, 00:21:47.172 "prchk_guard": false, 00:21:47.172 "hdgst": false, 00:21:47.172 "ddgst": false, 00:21:47.172 "allow_unrecognized_csi": false, 00:21:47.172 "method": "bdev_nvme_attach_controller", 00:21:47.172 "req_id": 1 00:21:47.172 } 00:21:47.172 Got JSON-RPC error response 00:21:47.172 response: 00:21:47.172 { 00:21:47.172 "code": -114, 00:21:47.172 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:47.172 } 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:47.172 16:14:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.172 request: 00:21:47.172 { 00:21:47.172 "name": "NVMe0", 00:21:47.172 "trtype": "tcp", 00:21:47.172 "traddr": "10.0.0.2", 00:21:47.172 "adrfam": "ipv4", 00:21:47.172 "trsvcid": "4420", 00:21:47.172 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.172 "hostaddr": "10.0.0.1", 00:21:47.172 "prchk_reftag": false, 00:21:47.172 "prchk_guard": false, 00:21:47.172 "hdgst": false, 00:21:47.172 "ddgst": false, 00:21:47.172 "allow_unrecognized_csi": false, 00:21:47.172 "method": "bdev_nvme_attach_controller", 00:21:47.172 "req_id": 1 00:21:47.172 } 00:21:47.172 Got JSON-RPC error response 00:21:47.172 response: 00:21:47.172 { 00:21:47.172 "code": -114, 00:21:47.172 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:47.172 } 00:21:47.172 16:14:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:47.172 16:14:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.172 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:47.172 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.172 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.431 request: 00:21:47.431 { 00:21:47.431 "name": "NVMe0", 00:21:47.431 "trtype": "tcp", 00:21:47.431 "traddr": "10.0.0.2", 00:21:47.431 "adrfam": "ipv4", 00:21:47.431 "trsvcid": "4420", 00:21:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.431 "hostaddr": "10.0.0.1", 00:21:47.431 "prchk_reftag": false, 00:21:47.431 "prchk_guard": false, 00:21:47.431 "hdgst": false, 00:21:47.431 "ddgst": false, 00:21:47.431 "multipath": "disable", 00:21:47.431 "allow_unrecognized_csi": false, 00:21:47.431 "method": "bdev_nvme_attach_controller", 00:21:47.431 "req_id": 1 00:21:47.431 } 00:21:47.431 Got JSON-RPC error response 00:21:47.431 response: 00:21:47.431 { 00:21:47.431 "code": -114, 00:21:47.431 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:47.431 } 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.431 request: 00:21:47.431 { 00:21:47.431 "name": "NVMe0", 00:21:47.431 "trtype": "tcp", 00:21:47.431 "traddr": "10.0.0.2", 00:21:47.431 "adrfam": "ipv4", 00:21:47.431 "trsvcid": "4420", 00:21:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.431 "hostaddr": "10.0.0.1", 00:21:47.431 "prchk_reftag": false, 00:21:47.431 "prchk_guard": false, 00:21:47.431 "hdgst": false, 00:21:47.431 "ddgst": false, 00:21:47.431 "multipath": "failover", 00:21:47.431 "allow_unrecognized_csi": false, 00:21:47.431 "method": "bdev_nvme_attach_controller", 00:21:47.431 "req_id": 1 00:21:47.431 } 00:21:47.431 Got JSON-RPC error response 00:21:47.431 response: 00:21:47.431 { 00:21:47.431 "code": -114, 00:21:47.431 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:47.431 } 00:21:47.431 16:14:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.431 NVMe0n1 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.431 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.431 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.689 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.689 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:47.689 16:14:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.622 { 00:21:48.622 "results": [ 00:21:48.622 { 00:21:48.622 "job": "NVMe0n1", 00:21:48.622 "core_mask": "0x1", 00:21:48.622 "workload": "write", 00:21:48.622 "status": "finished", 00:21:48.622 "queue_depth": 128, 00:21:48.622 "io_size": 4096, 00:21:48.622 "runtime": 1.007255, 00:21:48.622 "iops": 24340.410323105865, 00:21:48.622 "mibps": 95.07972782463229, 00:21:48.622 "io_failed": 0, 00:21:48.622 "io_timeout": 0, 00:21:48.622 "avg_latency_us": 5249.383148445356, 00:21:48.622 "min_latency_us": 3205.5652173913045, 00:21:48.622 "max_latency_us": 13620.090434782609 00:21:48.622 } 00:21:48.622 ], 00:21:48.622 "core_count": 1 00:21:48.622 } 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2805278 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2805278 ']' 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2805278 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.622 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2805278 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2805278' 00:21:48.880 killing process with pid 2805278 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2805278 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2805278 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:48.880 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:48.880 [2024-11-20 16:14:47.644578] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:21:48.880 [2024-11-20 16:14:47.644626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805278 ] 00:21:48.880 [2024-11-20 16:14:47.718861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.880 [2024-11-20 16:14:47.760560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.880 [2024-11-20 16:14:48.259613] bdev.c:4765:bdev_name_add: *ERROR*: Bdev name c427030c-3c36-4bef-8552-1b658e7c3f12 already exists 00:21:48.880 [2024-11-20 16:14:48.259641] bdev.c:7965:bdev_register: *ERROR*: Unable to add uuid:c427030c-3c36-4bef-8552-1b658e7c3f12 alias for bdev NVMe1n1 00:21:48.880 [2024-11-20 16:14:48.259649] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:48.880 Running I/O for 1 seconds... 00:21:48.880 24279.00 IOPS, 94.84 MiB/s 00:21:48.880 Latency(us) 00:21:48.880 [2024-11-20T15:14:49.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.880 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:48.880 NVMe0n1 : 1.01 24340.41 95.08 0.00 0.00 5249.38 3205.57 13620.09 00:21:48.880 [2024-11-20T15:14:49.717Z] =================================================================================================================== 00:21:48.880 [2024-11-20T15:14:49.717Z] Total : 24340.41 95.08 0.00 0.00 5249.38 3205.57 13620.09 00:21:48.880 Received shutdown signal, test time was about 1.000000 seconds 00:21:48.880 00:21:48.880 Latency(us) 00:21:48.880 [2024-11-20T15:14:49.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.880 [2024-11-20T15:14:49.717Z] =================================================================================================================== 00:21:48.880 [2024-11-20T15:14:49.717Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:48.880 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:48.880 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.881 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:48.881 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.881 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:48.881 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.881 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.881 rmmod nvme_tcp 00:21:48.881 rmmod nvme_fabrics 00:21:49.140 rmmod nvme_keyring 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2805232 ']' 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2805232 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2805232 ']' 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2805232 
00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2805232 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2805232' 00:21:49.140 killing process with pid 2805232 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2805232 00:21:49.140 16:14:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2805232 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.400 16:14:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.304 16:14:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.304 00:21:51.304 real 0m11.058s 00:21:51.304 user 0m11.932s 00:21:51.304 sys 0m5.152s 00:21:51.304 16:14:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.304 16:14:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.304 ************************************ 00:21:51.305 END TEST nvmf_multicontroller 00:21:51.305 ************************************ 00:21:51.305 16:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:51.305 16:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.305 16:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.305 16:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.564 ************************************ 00:21:51.564 START TEST nvmf_aer 00:21:51.564 ************************************ 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:51.564 * Looking for test storage... 
00:21:51.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.564 --rc genhtml_branch_coverage=1 00:21:51.564 --rc genhtml_function_coverage=1 00:21:51.564 --rc genhtml_legend=1 00:21:51.564 --rc geninfo_all_blocks=1 00:21:51.564 --rc geninfo_unexecuted_blocks=1 00:21:51.564 00:21:51.564 ' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.564 --rc 
genhtml_branch_coverage=1 00:21:51.564 --rc genhtml_function_coverage=1 00:21:51.564 --rc genhtml_legend=1 00:21:51.564 --rc geninfo_all_blocks=1 00:21:51.564 --rc geninfo_unexecuted_blocks=1 00:21:51.564 00:21:51.564 ' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.564 --rc genhtml_branch_coverage=1 00:21:51.564 --rc genhtml_function_coverage=1 00:21:51.564 --rc genhtml_legend=1 00:21:51.564 --rc geninfo_all_blocks=1 00:21:51.564 --rc geninfo_unexecuted_blocks=1 00:21:51.564 00:21:51.564 ' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.564 --rc genhtml_branch_coverage=1 00:21:51.564 --rc genhtml_function_coverage=1 00:21:51.564 --rc genhtml_legend=1 00:21:51.564 --rc geninfo_all_blocks=1 00:21:51.564 --rc geninfo_unexecuted_blocks=1 00:21:51.564 00:21:51.564 ' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.564 16:14:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.564 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.565 16:14:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.136 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:58.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:58.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.137 16:14:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:58.137 Found net devices under 0000:86:00.0: cvl_0_0 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:58.137 Found net devices under 0000:86:00.1: cvl_0_1 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.137 16:14:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:58.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:21:58.137 00:21:58.137 --- 10.0.0.2 ping statistics --- 00:21:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.137 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:58.137 00:21:58.137 --- 10.0.0.1 ping statistics --- 00:21:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.137 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2809056 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2809056 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2809056 ']' 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.137 [2024-11-20 16:14:58.316507] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:21:58.137 [2024-11-20 16:14:58.316555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.137 [2024-11-20 16:14:58.395918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.137 [2024-11-20 16:14:58.439206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:58.137 [2024-11-20 16:14:58.439245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.137 [2024-11-20 16:14:58.439253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.137 [2024-11-20 16:14:58.439260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.137 [2024-11-20 16:14:58.439265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.137 [2024-11-20 16:14:58.440836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.137 [2024-11-20 16:14:58.440944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.137 [2024-11-20 16:14:58.441058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.137 [2024-11-20 16:14:58.441059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:58.137 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 [2024-11-20 16:14:58.578926] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 Malloc0 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 [2024-11-20 16:14:58.639887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.138 [ 00:21:58.138 { 00:21:58.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:58.138 "subtype": "Discovery", 00:21:58.138 "listen_addresses": [], 00:21:58.138 "allow_any_host": true, 00:21:58.138 "hosts": [] 00:21:58.138 }, 00:21:58.138 { 00:21:58.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.138 "subtype": "NVMe", 00:21:58.138 "listen_addresses": [ 00:21:58.138 { 00:21:58.138 "trtype": "TCP", 00:21:58.138 "adrfam": "IPv4", 00:21:58.138 "traddr": "10.0.0.2", 00:21:58.138 "trsvcid": "4420" 00:21:58.138 } 00:21:58.138 ], 00:21:58.138 "allow_any_host": true, 00:21:58.138 "hosts": [], 00:21:58.138 "serial_number": "SPDK00000000000001", 00:21:58.138 "model_number": "SPDK bdev Controller", 00:21:58.138 "max_namespaces": 2, 00:21:58.138 "min_cntlid": 1, 00:21:58.138 "max_cntlid": 65519, 00:21:58.138 "namespaces": [ 00:21:58.138 { 00:21:58.138 "nsid": 1, 00:21:58.138 "bdev_name": "Malloc0", 00:21:58.138 "name": "Malloc0", 00:21:58.138 "nguid": "E14903C64A2C4DAABF07BA57ECF344D1", 00:21:58.138 "uuid": "e14903c6-4a2c-4daa-bf07-ba57ecf344d1" 00:21:58.138 } 00:21:58.138 ] 00:21:58.138 } 00:21:58.138 ] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2809287 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:58.138 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:58.398 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:58.398 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:58.398 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:58.398 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:58.398 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.398 16:14:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.398 Malloc1 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.398 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.398 Asynchronous Event Request test 00:21:58.398 Attaching to 10.0.0.2 00:21:58.398 Attached to 10.0.0.2 00:21:58.398 Registering asynchronous event callbacks... 00:21:58.398 Starting namespace attribute notice tests for all controllers... 00:21:58.399 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:58.399 aer_cb - Changed Namespace 00:21:58.399 Cleaning up... 
00:21:58.399 [ 00:21:58.399 { 00:21:58.399 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:58.399 "subtype": "Discovery", 00:21:58.399 "listen_addresses": [], 00:21:58.399 "allow_any_host": true, 00:21:58.399 "hosts": [] 00:21:58.399 }, 00:21:58.399 { 00:21:58.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.399 "subtype": "NVMe", 00:21:58.399 "listen_addresses": [ 00:21:58.399 { 00:21:58.399 "trtype": "TCP", 00:21:58.399 "adrfam": "IPv4", 00:21:58.399 "traddr": "10.0.0.2", 00:21:58.399 "trsvcid": "4420" 00:21:58.399 } 00:21:58.399 ], 00:21:58.399 "allow_any_host": true, 00:21:58.399 "hosts": [], 00:21:58.399 "serial_number": "SPDK00000000000001", 00:21:58.399 "model_number": "SPDK bdev Controller", 00:21:58.399 "max_namespaces": 2, 00:21:58.399 "min_cntlid": 1, 00:21:58.399 "max_cntlid": 65519, 00:21:58.399 "namespaces": [ 00:21:58.399 { 00:21:58.399 "nsid": 1, 00:21:58.399 "bdev_name": "Malloc0", 00:21:58.399 "name": "Malloc0", 00:21:58.399 "nguid": "E14903C64A2C4DAABF07BA57ECF344D1", 00:21:58.399 "uuid": "e14903c6-4a2c-4daa-bf07-ba57ecf344d1" 00:21:58.399 }, 00:21:58.399 { 00:21:58.399 "nsid": 2, 00:21:58.399 "bdev_name": "Malloc1", 00:21:58.399 "name": "Malloc1", 00:21:58.399 "nguid": "D7D777ECD1244D7F92E711E70D000100", 00:21:58.399 "uuid": "d7d777ec-d124-4d7f-92e7-11e70d000100" 00:21:58.399 } 00:21:58.399 ] 00:21:58.399 } 00:21:58.399 ] 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2809287 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.399 16:14:59 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.399 rmmod nvme_tcp 00:21:58.399 rmmod nvme_fabrics 00:21:58.399 rmmod nvme_keyring 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2809056 ']' 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2809056 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2809056 ']' 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2809056 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809056 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809056' 00:21:58.399 killing process with pid 2809056 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2809056 00:21:58.399 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2809056 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.658 16:14:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.194 00:22:01.194 real 0m9.317s 00:22:01.194 user 0m5.507s 00:22:01.194 sys 0m4.903s 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:01.194 ************************************ 00:22:01.194 END TEST nvmf_aer 00:22:01.194 ************************************ 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.194 ************************************ 00:22:01.194 START TEST nvmf_async_init 00:22:01.194 ************************************ 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:01.194 * Looking for test storage... 
00:22:01.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:01.194 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.195 16:15:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.195 --rc genhtml_branch_coverage=1 00:22:01.195 --rc genhtml_function_coverage=1 00:22:01.195 --rc genhtml_legend=1 00:22:01.195 --rc geninfo_all_blocks=1 00:22:01.195 --rc geninfo_unexecuted_blocks=1 00:22:01.195 
00:22:01.195 ' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.195 --rc genhtml_branch_coverage=1 00:22:01.195 --rc genhtml_function_coverage=1 00:22:01.195 --rc genhtml_legend=1 00:22:01.195 --rc geninfo_all_blocks=1 00:22:01.195 --rc geninfo_unexecuted_blocks=1 00:22:01.195 00:22:01.195 ' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.195 --rc genhtml_branch_coverage=1 00:22:01.195 --rc genhtml_function_coverage=1 00:22:01.195 --rc genhtml_legend=1 00:22:01.195 --rc geninfo_all_blocks=1 00:22:01.195 --rc geninfo_unexecuted_blocks=1 00:22:01.195 00:22:01.195 ' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.195 --rc genhtml_branch_coverage=1 00:22:01.195 --rc genhtml_function_coverage=1 00:22:01.195 --rc genhtml_legend=1 00:22:01.195 --rc geninfo_all_blocks=1 00:22:01.195 --rc geninfo_unexecuted_blocks=1 00:22:01.195 00:22:01.195 ' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0617d05a8165479abd2d4efb6a6caac7 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.195 16:15:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.764 16:15:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:07.764 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:07.764 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:07.764 Found net devices under 0000:86:00.0: cvl_0_0 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:07.764 Found net devices under 0000:86:00.1: cvl_0_1 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.764 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:07.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:22:07.765 00:22:07.765 --- 10.0.0.2 ping statistics --- 00:22:07.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.765 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:07.765 00:22:07.765 --- 10.0.0.1 ping statistics --- 00:22:07.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.765 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2812946 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2812946 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2812946 ']' 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 [2024-11-20 16:15:07.733132] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:22:07.765 [2024-11-20 16:15:07.733176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.765 [2024-11-20 16:15:07.811971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.765 [2024-11-20 16:15:07.853149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.765 [2024-11-20 16:15:07.853185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.765 [2024-11-20 16:15:07.853192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.765 [2024-11-20 16:15:07.853198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.765 [2024-11-20 16:15:07.853203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.765 [2024-11-20 16:15:07.853801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 [2024-11-20 16:15:07.986463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 null0 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.765 16:15:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0617d05a8165479abd2d4efb6a6caac7 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.765 [2024-11-20 16:15:08.026684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.765 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.766 nvme0n1 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.766 [ 00:22:07.766 { 00:22:07.766 "name": "nvme0n1", 00:22:07.766 "aliases": [ 00:22:07.766 "0617d05a-8165-479a-bd2d-4efb6a6caac7" 00:22:07.766 ], 00:22:07.766 "product_name": "NVMe disk", 00:22:07.766 "block_size": 512, 00:22:07.766 "num_blocks": 2097152, 00:22:07.766 "uuid": "0617d05a-8165-479a-bd2d-4efb6a6caac7", 00:22:07.766 "numa_id": 1, 00:22:07.766 "assigned_rate_limits": { 00:22:07.766 "rw_ios_per_sec": 0, 00:22:07.766 "rw_mbytes_per_sec": 0, 00:22:07.766 "r_mbytes_per_sec": 0, 00:22:07.766 "w_mbytes_per_sec": 0 00:22:07.766 }, 00:22:07.766 "claimed": false, 00:22:07.766 "zoned": false, 00:22:07.766 "supported_io_types": { 00:22:07.766 "read": true, 00:22:07.766 "write": true, 00:22:07.766 "unmap": false, 00:22:07.766 "flush": true, 00:22:07.766 "reset": true, 00:22:07.766 "nvme_admin": true, 00:22:07.766 "nvme_io": true, 00:22:07.766 "nvme_io_md": false, 00:22:07.766 "write_zeroes": true, 00:22:07.766 "zcopy": false, 00:22:07.766 "get_zone_info": false, 00:22:07.766 "zone_management": false, 00:22:07.766 "zone_append": false, 00:22:07.766 "compare": true, 00:22:07.766 "compare_and_write": true, 00:22:07.766 "abort": true, 00:22:07.766 "seek_hole": false, 00:22:07.766 "seek_data": false, 00:22:07.766 "copy": true, 00:22:07.766 
"nvme_iov_md": false 00:22:07.766 }, 00:22:07.766 "memory_domains": [ 00:22:07.766 { 00:22:07.766 "dma_device_id": "system", 00:22:07.766 "dma_device_type": 1 00:22:07.766 } 00:22:07.766 ], 00:22:07.766 "driver_specific": { 00:22:07.766 "nvme": [ 00:22:07.766 { 00:22:07.766 "trid": { 00:22:07.766 "trtype": "TCP", 00:22:07.766 "adrfam": "IPv4", 00:22:07.766 "traddr": "10.0.0.2", 00:22:07.766 "trsvcid": "4420", 00:22:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:07.766 }, 00:22:07.766 "ctrlr_data": { 00:22:07.766 "cntlid": 1, 00:22:07.766 "vendor_id": "0x8086", 00:22:07.766 "model_number": "SPDK bdev Controller", 00:22:07.766 "serial_number": "00000000000000000000", 00:22:07.766 "firmware_revision": "25.01", 00:22:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:07.766 "oacs": { 00:22:07.766 "security": 0, 00:22:07.766 "format": 0, 00:22:07.766 "firmware": 0, 00:22:07.766 "ns_manage": 0 00:22:07.766 }, 00:22:07.766 "multi_ctrlr": true, 00:22:07.766 "ana_reporting": false 00:22:07.766 }, 00:22:07.766 "vs": { 00:22:07.766 "nvme_version": "1.3" 00:22:07.766 }, 00:22:07.766 "ns_data": { 00:22:07.766 "id": 1, 00:22:07.766 "can_share": true 00:22:07.766 } 00:22:07.766 } 00:22:07.766 ], 00:22:07.766 "mp_policy": "active_passive" 00:22:07.766 } 00:22:07.766 } 00:22:07.766 ] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.766 [2024-11-20 16:15:08.275400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:07.766 [2024-11-20 16:15:08.275464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1113900 (9): Bad file descriptor 00:22:07.766 [2024-11-20 16:15:08.409028] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.766 [ 00:22:07.766 { 00:22:07.766 "name": "nvme0n1", 00:22:07.766 "aliases": [ 00:22:07.766 "0617d05a-8165-479a-bd2d-4efb6a6caac7" 00:22:07.766 ], 00:22:07.766 "product_name": "NVMe disk", 00:22:07.766 "block_size": 512, 00:22:07.766 "num_blocks": 2097152, 00:22:07.766 "uuid": "0617d05a-8165-479a-bd2d-4efb6a6caac7", 00:22:07.766 "numa_id": 1, 00:22:07.766 "assigned_rate_limits": { 00:22:07.766 "rw_ios_per_sec": 0, 00:22:07.766 "rw_mbytes_per_sec": 0, 00:22:07.766 "r_mbytes_per_sec": 0, 00:22:07.766 "w_mbytes_per_sec": 0 00:22:07.766 }, 00:22:07.766 "claimed": false, 00:22:07.766 "zoned": false, 00:22:07.766 "supported_io_types": { 00:22:07.766 "read": true, 00:22:07.766 "write": true, 00:22:07.766 "unmap": false, 00:22:07.766 "flush": true, 00:22:07.766 "reset": true, 00:22:07.766 "nvme_admin": true, 00:22:07.766 "nvme_io": true, 00:22:07.766 "nvme_io_md": false, 00:22:07.766 "write_zeroes": true, 00:22:07.766 "zcopy": false, 00:22:07.766 "get_zone_info": false, 00:22:07.766 "zone_management": false, 00:22:07.766 "zone_append": false, 00:22:07.766 "compare": true, 00:22:07.766 "compare_and_write": true, 00:22:07.766 "abort": true, 00:22:07.766 "seek_hole": false, 00:22:07.766 "seek_data": false, 00:22:07.766 "copy": true, 00:22:07.766 "nvme_iov_md": false 00:22:07.766 }, 00:22:07.766 "memory_domains": [ 
00:22:07.766 { 00:22:07.766 "dma_device_id": "system", 00:22:07.766 "dma_device_type": 1 00:22:07.766 } 00:22:07.766 ], 00:22:07.766 "driver_specific": { 00:22:07.766 "nvme": [ 00:22:07.766 { 00:22:07.766 "trid": { 00:22:07.766 "trtype": "TCP", 00:22:07.766 "adrfam": "IPv4", 00:22:07.766 "traddr": "10.0.0.2", 00:22:07.766 "trsvcid": "4420", 00:22:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:07.766 }, 00:22:07.766 "ctrlr_data": { 00:22:07.766 "cntlid": 2, 00:22:07.766 "vendor_id": "0x8086", 00:22:07.766 "model_number": "SPDK bdev Controller", 00:22:07.766 "serial_number": "00000000000000000000", 00:22:07.766 "firmware_revision": "25.01", 00:22:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:07.766 "oacs": { 00:22:07.766 "security": 0, 00:22:07.766 "format": 0, 00:22:07.766 "firmware": 0, 00:22:07.766 "ns_manage": 0 00:22:07.766 }, 00:22:07.766 "multi_ctrlr": true, 00:22:07.766 "ana_reporting": false 00:22:07.766 }, 00:22:07.766 "vs": { 00:22:07.766 "nvme_version": "1.3" 00:22:07.766 }, 00:22:07.766 "ns_data": { 00:22:07.766 "id": 1, 00:22:07.766 "can_share": true 00:22:07.766 } 00:22:07.766 } 00:22:07.766 ], 00:22:07.766 "mp_policy": "active_passive" 00:22:07.766 } 00:22:07.766 } 00:22:07.766 ] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WakvFqourH 
00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WakvFqourH 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.WakvFqourH 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.766 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 [2024-11-20 16:15:08.471995] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.767 [2024-11-20 16:15:08.472095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 [2024-11-20 16:15:08.488048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.767 nvme0n1 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 [ 00:22:07.767 { 00:22:07.767 "name": "nvme0n1", 00:22:07.767 "aliases": [ 00:22:07.767 "0617d05a-8165-479a-bd2d-4efb6a6caac7" 00:22:07.767 ], 00:22:07.767 "product_name": "NVMe disk", 00:22:07.767 "block_size": 512, 00:22:07.767 "num_blocks": 2097152, 00:22:07.767 "uuid": "0617d05a-8165-479a-bd2d-4efb6a6caac7", 00:22:07.767 "numa_id": 1, 00:22:07.767 "assigned_rate_limits": { 00:22:07.767 "rw_ios_per_sec": 0, 00:22:07.767 
"rw_mbytes_per_sec": 0, 00:22:07.767 "r_mbytes_per_sec": 0, 00:22:07.767 "w_mbytes_per_sec": 0 00:22:07.767 }, 00:22:07.767 "claimed": false, 00:22:07.767 "zoned": false, 00:22:07.767 "supported_io_types": { 00:22:07.767 "read": true, 00:22:07.767 "write": true, 00:22:07.767 "unmap": false, 00:22:07.767 "flush": true, 00:22:07.767 "reset": true, 00:22:07.767 "nvme_admin": true, 00:22:07.767 "nvme_io": true, 00:22:07.767 "nvme_io_md": false, 00:22:07.767 "write_zeroes": true, 00:22:07.767 "zcopy": false, 00:22:07.767 "get_zone_info": false, 00:22:07.767 "zone_management": false, 00:22:07.767 "zone_append": false, 00:22:07.767 "compare": true, 00:22:07.767 "compare_and_write": true, 00:22:07.767 "abort": true, 00:22:07.767 "seek_hole": false, 00:22:07.767 "seek_data": false, 00:22:07.767 "copy": true, 00:22:07.767 "nvme_iov_md": false 00:22:07.767 }, 00:22:07.767 "memory_domains": [ 00:22:07.767 { 00:22:07.767 "dma_device_id": "system", 00:22:07.767 "dma_device_type": 1 00:22:07.767 } 00:22:07.767 ], 00:22:07.767 "driver_specific": { 00:22:07.767 "nvme": [ 00:22:07.767 { 00:22:07.767 "trid": { 00:22:07.767 "trtype": "TCP", 00:22:07.767 "adrfam": "IPv4", 00:22:07.767 "traddr": "10.0.0.2", 00:22:07.767 "trsvcid": "4421", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:07.767 }, 00:22:07.767 "ctrlr_data": { 00:22:07.767 "cntlid": 3, 00:22:07.767 "vendor_id": "0x8086", 00:22:07.767 "model_number": "SPDK bdev Controller", 00:22:07.767 "serial_number": "00000000000000000000", 00:22:07.767 "firmware_revision": "25.01", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:07.767 "oacs": { 00:22:07.767 "security": 0, 00:22:07.767 "format": 0, 00:22:07.767 "firmware": 0, 00:22:07.767 "ns_manage": 0 00:22:07.767 }, 00:22:07.767 "multi_ctrlr": true, 00:22:07.767 "ana_reporting": false 00:22:07.767 }, 00:22:07.767 "vs": { 00:22:07.767 "nvme_version": "1.3" 00:22:07.767 }, 00:22:07.767 "ns_data": { 00:22:07.767 "id": 1, 00:22:07.767 "can_share": true 00:22:07.767 } 
00:22:07.767 } 00:22:07.767 ], 00:22:07.767 "mp_policy": "active_passive" 00:22:07.767 } 00:22:07.767 } 00:22:07.767 ] 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.WakvFqourH 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.767 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.027 rmmod nvme_tcp 00:22:08.027 rmmod nvme_fabrics 00:22:08.027 rmmod nvme_keyring 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:08.027 16:15:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2812946 ']' 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2812946 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2812946 ']' 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2812946 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812946 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812946' 00:22:08.027 killing process with pid 2812946 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2812946 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2812946 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:08.027 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.027 
16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.287 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.287 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.287 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.287 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.287 16:15:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.203 00:22:10.203 real 0m9.382s 00:22:10.203 user 0m3.011s 00:22:10.203 sys 0m4.763s 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:10.203 ************************************ 00:22:10.203 END TEST nvmf_async_init 00:22:10.203 ************************************ 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.203 16:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.203 ************************************ 00:22:10.203 START TEST dma 00:22:10.203 ************************************ 00:22:10.203 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:10.463 * Looking for test storage... 00:22:10.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.464 --rc genhtml_branch_coverage=1 00:22:10.464 --rc genhtml_function_coverage=1 00:22:10.464 --rc genhtml_legend=1 00:22:10.464 --rc geninfo_all_blocks=1 00:22:10.464 --rc geninfo_unexecuted_blocks=1 00:22:10.464 00:22:10.464 ' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:10.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.464 --rc genhtml_branch_coverage=1 00:22:10.464 --rc genhtml_function_coverage=1 
00:22:10.464 --rc genhtml_legend=1 00:22:10.464 --rc geninfo_all_blocks=1 00:22:10.464 --rc geninfo_unexecuted_blocks=1 00:22:10.464 00:22:10.464 ' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.464 --rc genhtml_branch_coverage=1 00:22:10.464 --rc genhtml_function_coverage=1 00:22:10.464 --rc genhtml_legend=1 00:22:10.464 --rc geninfo_all_blocks=1 00:22:10.464 --rc geninfo_unexecuted_blocks=1 00:22:10.464 00:22:10.464 ' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.464 --rc genhtml_branch_coverage=1 00:22:10.464 --rc genhtml_function_coverage=1 00:22:10.464 --rc genhtml_legend=1 00:22:10.464 --rc geninfo_all_blocks=1 00:22:10.464 --rc geninfo_unexecuted_blocks=1 00:22:10.464 00:22:10.464 ' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:10.464 
16:15:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:10.464 00:22:10.464 real 0m0.211s 00:22:10.464 user 0m0.129s 00:22:10.464 sys 0m0.096s 00:22:10.464 16:15:11 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:10.464 ************************************ 00:22:10.464 END TEST dma 00:22:10.464 ************************************ 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.464 16:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.465 ************************************ 00:22:10.465 START TEST nvmf_identify 00:22:10.465 ************************************ 00:22:10.465 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:10.725 * Looking for test storage... 
00:22:10.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.725 --rc genhtml_branch_coverage=1 00:22:10.725 --rc genhtml_function_coverage=1 00:22:10.725 --rc genhtml_legend=1 00:22:10.725 --rc geninfo_all_blocks=1 00:22:10.725 --rc geninfo_unexecuted_blocks=1 00:22:10.725 00:22:10.725 ' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.725 --rc genhtml_branch_coverage=1 00:22:10.725 --rc genhtml_function_coverage=1 00:22:10.725 --rc genhtml_legend=1 00:22:10.725 --rc geninfo_all_blocks=1 00:22:10.725 --rc geninfo_unexecuted_blocks=1 00:22:10.725 00:22:10.725 ' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.725 --rc genhtml_branch_coverage=1 00:22:10.725 --rc genhtml_function_coverage=1 00:22:10.725 --rc genhtml_legend=1 00:22:10.725 --rc geninfo_all_blocks=1 00:22:10.725 --rc geninfo_unexecuted_blocks=1 00:22:10.725 00:22:10.725 ' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.725 --rc genhtml_branch_coverage=1 00:22:10.725 --rc genhtml_function_coverage=1 00:22:10.725 --rc genhtml_legend=1 00:22:10.725 --rc geninfo_all_blocks=1 00:22:10.725 --rc geninfo_unexecuted_blocks=1 00:22:10.725 00:22:10.725 ' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.725 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.726 16:15:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.298 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.298 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.298 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.298 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.299 16:15:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:17.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.299 
16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:17.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:17.299 Found net devices under 0000:86:00.0: cvl_0_0 00:22:17.299 16:15:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:17.299 Found net devices under 0000:86:00.1: cvl_0_1 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:22:17.299 00:22:17.299 --- 10.0.0.2 ping statistics --- 00:22:17.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.299 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:22:17.299 00:22:17.299 --- 10.0.0.1 ping statistics --- 00:22:17.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.299 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2817158 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2817158 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2817158 ']' 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.299 16:15:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.299 [2024-11-20 16:15:17.495460] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:22:17.300 [2024-11-20 16:15:17.495514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.300 [2024-11-20 16:15:17.581334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.300 [2024-11-20 16:15:17.623666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.300 [2024-11-20 16:15:17.623710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.300 [2024-11-20 16:15:17.623717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.300 [2024-11-20 16:15:17.623723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.300 [2024-11-20 16:15:17.623728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.300 [2024-11-20 16:15:17.625241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.300 [2024-11-20 16:15:17.625349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.300 [2024-11-20 16:15:17.625447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.300 [2024-11-20 16:15:17.625456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.557 [2024-11-20 16:15:18.345439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.557 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.817 Malloc0 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.817 16:15:18 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.817 [2024-11-20 16:15:18.450111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.817 16:15:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.817 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.817 [ 00:22:17.817 { 00:22:17.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:17.817 "subtype": "Discovery", 00:22:17.817 "listen_addresses": [ 00:22:17.817 { 00:22:17.817 "trtype": "TCP", 00:22:17.817 "adrfam": "IPv4", 00:22:17.817 "traddr": "10.0.0.2", 00:22:17.817 "trsvcid": "4420" 00:22:17.817 } 00:22:17.817 ], 00:22:17.817 "allow_any_host": true, 00:22:17.817 "hosts": [] 00:22:17.817 }, 00:22:17.817 { 00:22:17.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.817 "subtype": "NVMe", 00:22:17.817 "listen_addresses": [ 00:22:17.817 { 00:22:17.817 "trtype": "TCP", 00:22:17.817 "adrfam": "IPv4", 00:22:17.817 "traddr": "10.0.0.2", 00:22:17.817 "trsvcid": "4420" 00:22:17.817 } 00:22:17.817 ], 00:22:17.817 "allow_any_host": true, 00:22:17.817 "hosts": [], 00:22:17.817 "serial_number": "SPDK00000000000001", 00:22:17.817 "model_number": "SPDK bdev Controller", 00:22:17.817 "max_namespaces": 32, 00:22:17.817 "min_cntlid": 1, 00:22:17.817 "max_cntlid": 65519, 00:22:17.817 "namespaces": [ 00:22:17.817 { 00:22:17.817 "nsid": 1, 00:22:17.817 "bdev_name": "Malloc0", 00:22:17.817 "name": "Malloc0", 00:22:17.817 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:17.817 "eui64": "ABCDEF0123456789", 00:22:17.817 "uuid": "82ac9fab-ff7d-4fac-b01f-453c198901a8" 00:22:17.817 } 00:22:17.817 ] 00:22:17.817 } 00:22:17.818 ] 00:22:17.818 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.818 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:17.818 [2024-11-20 16:15:18.501427] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:22:17.818 [2024-11-20 16:15:18.501461] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817212 ] 00:22:17.818 [2024-11-20 16:15:18.542945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:17.818 [2024-11-20 16:15:18.547000] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:17.818 [2024-11-20 16:15:18.547005] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:17.818 [2024-11-20 16:15:18.547020] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:17.818 [2024-11-20 16:15:18.547030] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:17.818 [2024-11-20 16:15:18.547609] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:17.818 [2024-11-20 16:15:18.547639] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e3f690 0 00:22:17.818 [2024-11-20 16:15:18.561964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:17.818 [2024-11-20 16:15:18.561980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:17.818 [2024-11-20 16:15:18.561984] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:17.818 [2024-11-20 16:15:18.561987] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:17.818 [2024-11-20 16:15:18.562021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.562026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.562030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.818 [2024-11-20 16:15:18.562042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:17.818 [2024-11-20 16:15:18.562061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 16:15:18.569959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.569968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.818 [2024-11-20 16:15:18.569971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.569979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.569988] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:17.818 [2024-11-20 16:15:18.569994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:17.818 [2024-11-20 16:15:18.569999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:17.818 [2024-11-20 16:15:18.570011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 
00:22:17.818 [2024-11-20 16:15:18.570025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.818 [2024-11-20 16:15:18.570037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 16:15:18.570202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.570208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.818 [2024-11-20 16:15:18.570211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.570220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:17.818 [2024-11-20 16:15:18.570226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:17.818 [2024-11-20 16:15:18.570233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.818 [2024-11-20 16:15:18.570245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.818 [2024-11-20 16:15:18.570255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 16:15:18.570320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.570326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:17.818 [2024-11-20 16:15:18.570329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.570337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:17.818 [2024-11-20 16:15:18.570344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:17.818 [2024-11-20 16:15:18.570349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.818 [2024-11-20 16:15:18.570361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.818 [2024-11-20 16:15:18.570372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 16:15:18.570432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.570438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.818 [2024-11-20 16:15:18.570441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.570451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:17.818 [2024-11-20 16:15:18.570460] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.818 [2024-11-20 16:15:18.570472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.818 [2024-11-20 16:15:18.570482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 16:15:18.570554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.570560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.818 [2024-11-20 16:15:18.570563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.570571] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:17.818 [2024-11-20 16:15:18.570575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:17.818 [2024-11-20 16:15:18.570581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:17.818 [2024-11-20 16:15:18.570689] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:17.818 [2024-11-20 16:15:18.570694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:17.818 [2024-11-20 16:15:18.570701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.818 [2024-11-20 16:15:18.570713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.818 [2024-11-20 16:15:18.570723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 16:15:18.570804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.570810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.818 [2024-11-20 16:15:18.570813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.570820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:17.818 [2024-11-20 16:15:18.570828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.818 [2024-11-20 16:15:18.570841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.818 [2024-11-20 16:15:18.570850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.818 [2024-11-20 
16:15:18.570921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.818 [2024-11-20 16:15:18.570927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.818 [2024-11-20 16:15:18.570932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.818 [2024-11-20 16:15:18.570936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.818 [2024-11-20 16:15:18.570939] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:17.818 [2024-11-20 16:15:18.570944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:17.818 [2024-11-20 16:15:18.570955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:17.819 [2024-11-20 16:15:18.570965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:17.819 [2024-11-20 16:15:18.570973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.570977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.570982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.819 [2024-11-20 16:15:18.570993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.819 [2024-11-20 16:15:18.571083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:17.819 [2024-11-20 16:15:18.571088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:17.819 [2024-11-20 16:15:18.571092] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571095] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3f690): datao=0, datal=4096, cccid=0 00:22:17.819 [2024-11-20 16:15:18.571099] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea1100) on tqpair(0x1e3f690): expected_datao=0, payload_size=4096 00:22:17.819 [2024-11-20 16:15:18.571103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571116] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571120] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.819 [2024-11-20 16:15:18.571160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.819 [2024-11-20 16:15:18.571163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.819 [2024-11-20 16:15:18.571173] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:17.819 [2024-11-20 16:15:18.571178] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:17.819 [2024-11-20 16:15:18.571181] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:17.819 [2024-11-20 16:15:18.571189] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:17.819 [2024-11-20 16:15:18.571193] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:17.819 [2024-11-20 16:15:18.571197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:17.819 [2024-11-20 16:15:18.571206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:17.819 [2024-11-20 16:15:18.571212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:17.819 [2024-11-20 16:15:18.571237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.819 [2024-11-20 16:15:18.571303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.819 [2024-11-20 16:15:18.571309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.819 [2024-11-20 16:15:18.571312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:17.819 [2024-11-20 16:15:18.571332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571344] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.819 [2024-11-20 16:15:18.571349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.819 [2024-11-20 16:15:18.571365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.819 [2024-11-20 16:15:18.571380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.819 [2024-11-20 16:15:18.571395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:17.819 [2024-11-20 16:15:18.571403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:17.819 [2024-11-20 16:15:18.571408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.819 [2024-11-20 16:15:18.571428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1100, cid 0, qid 0 00:22:17.819 [2024-11-20 16:15:18.571432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1280, cid 1, qid 0 00:22:17.819 [2024-11-20 16:15:18.571436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1400, cid 2, qid 0 00:22:17.819 [2024-11-20 16:15:18.571440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:17.819 [2024-11-20 16:15:18.571444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1700, cid 4, qid 0 00:22:17.819 [2024-11-20 16:15:18.571539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.819 [2024-11-20 16:15:18.571545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.819 [2024-11-20 16:15:18.571548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1700) on tqpair=0x1e3f690 00:22:17.819 [2024-11-20 16:15:18.571558] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:17.819 [2024-11-20 16:15:18.571562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:17.819 [2024-11-20 16:15:18.571571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.819 [2024-11-20 16:15:18.571591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1700, cid 4, qid 0 00:22:17.819 [2024-11-20 16:15:18.571669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:17.819 [2024-11-20 16:15:18.571675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:17.819 [2024-11-20 16:15:18.571678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571681] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3f690): datao=0, datal=4096, cccid=4 00:22:17.819 [2024-11-20 16:15:18.571685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea1700) on tqpair(0x1e3f690): expected_datao=0, payload_size=4096 00:22:17.819 [2024-11-20 16:15:18.571689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571694] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571697] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.819 [2024-11-20 16:15:18.571716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.819 [2024-11-20 16:15:18.571719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1ea1700) on tqpair=0x1e3f690 00:22:17.819 [2024-11-20 16:15:18.571733] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:17.819 [2024-11-20 16:15:18.571753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.819 [2024-11-20 16:15:18.571769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e3f690) 00:22:17.819 [2024-11-20 16:15:18.571780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.819 [2024-11-20 16:15:18.571794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1700, cid 4, qid 0 00:22:17.819 [2024-11-20 16:15:18.571798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1880, cid 5, qid 0 00:22:17.819 [2024-11-20 16:15:18.571905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:17.819 [2024-11-20 16:15:18.571911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:17.819 [2024-11-20 16:15:18.571914] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571917] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3f690): datao=0, datal=1024, cccid=4 00:22:17.819 [2024-11-20 16:15:18.571923] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea1700) on tqpair(0x1e3f690): expected_datao=0, payload_size=1024 00:22:17.819 [2024-11-20 16:15:18.571927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571932] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571935] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:17.819 [2024-11-20 16:15:18.571940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.820 [2024-11-20 16:15:18.571944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.820 [2024-11-20 16:15:18.571953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.571956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1880) on tqpair=0x1e3f690 00:22:17.820 [2024-11-20 16:15:18.613080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.820 [2024-11-20 16:15:18.613093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.820 [2024-11-20 16:15:18.613096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1700) on tqpair=0x1e3f690 00:22:17.820 [2024-11-20 16:15:18.613112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3f690) 00:22:17.820 [2024-11-20 16:15:18.613124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.820 [2024-11-20 16:15:18.613141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1700, cid 4, qid 0 00:22:17.820 [2024-11-20 16:15:18.613245] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:17.820 [2024-11-20 16:15:18.613250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:17.820 [2024-11-20 16:15:18.613254] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613257] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3f690): datao=0, datal=3072, cccid=4 00:22:17.820 [2024-11-20 16:15:18.613261] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea1700) on tqpair(0x1e3f690): expected_datao=0, payload_size=3072 00:22:17.820 [2024-11-20 16:15:18.613265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613274] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:17.820 [2024-11-20 16:15:18.613297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:17.820 [2024-11-20 16:15:18.613300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1700) on tqpair=0x1e3f690 00:22:17.820 [2024-11-20 16:15:18.613312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3f690) 00:22:17.820 [2024-11-20 16:15:18.613321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.820 [2024-11-20 16:15:18.613335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1700, cid 4, qid 0 00:22:17.820 [2024-11-20 
16:15:18.613407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:17.820 [2024-11-20 16:15:18.613412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:17.820 [2024-11-20 16:15:18.613415] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613418] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3f690): datao=0, datal=8, cccid=4 00:22:17.820 [2024-11-20 16:15:18.613425] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea1700) on tqpair(0x1e3f690): expected_datao=0, payload_size=8 00:22:17.820 [2024-11-20 16:15:18.613429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613435] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:17.820 [2024-11-20 16:15:18.613438] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.084 [2024-11-20 16:15:18.655116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.084 [2024-11-20 16:15:18.655130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.084 [2024-11-20 16:15:18.655134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.084 [2024-11-20 16:15:18.655138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1700) on tqpair=0x1e3f690 00:22:18.084 ===================================================== 00:22:18.084 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:18.084 ===================================================== 00:22:18.084 Controller Capabilities/Features 00:22:18.084 ================================ 00:22:18.084 Vendor ID: 0000 00:22:18.084 Subsystem Vendor ID: 0000 00:22:18.084 Serial Number: .................... 00:22:18.084 Model Number: ........................................ 
00:22:18.084 Firmware Version: 25.01 00:22:18.084 Recommended Arb Burst: 0 00:22:18.084 IEEE OUI Identifier: 00 00 00 00:22:18.084 Multi-path I/O 00:22:18.084 May have multiple subsystem ports: No 00:22:18.084 May have multiple controllers: No 00:22:18.084 Associated with SR-IOV VF: No 00:22:18.084 Max Data Transfer Size: 131072 00:22:18.084 Max Number of Namespaces: 0 00:22:18.084 Max Number of I/O Queues: 1024 00:22:18.084 NVMe Specification Version (VS): 1.3 00:22:18.084 NVMe Specification Version (Identify): 1.3 00:22:18.084 Maximum Queue Entries: 128 00:22:18.084 Contiguous Queues Required: Yes 00:22:18.084 Arbitration Mechanisms Supported 00:22:18.084 Weighted Round Robin: Not Supported 00:22:18.084 Vendor Specific: Not Supported 00:22:18.084 Reset Timeout: 15000 ms 00:22:18.084 Doorbell Stride: 4 bytes 00:22:18.084 NVM Subsystem Reset: Not Supported 00:22:18.084 Command Sets Supported 00:22:18.084 NVM Command Set: Supported 00:22:18.084 Boot Partition: Not Supported 00:22:18.084 Memory Page Size Minimum: 4096 bytes 00:22:18.084 Memory Page Size Maximum: 4096 bytes 00:22:18.084 Persistent Memory Region: Not Supported 00:22:18.084 Optional Asynchronous Events Supported 00:22:18.084 Namespace Attribute Notices: Not Supported 00:22:18.084 Firmware Activation Notices: Not Supported 00:22:18.084 ANA Change Notices: Not Supported 00:22:18.084 PLE Aggregate Log Change Notices: Not Supported 00:22:18.084 LBA Status Info Alert Notices: Not Supported 00:22:18.084 EGE Aggregate Log Change Notices: Not Supported 00:22:18.084 Normal NVM Subsystem Shutdown event: Not Supported 00:22:18.085 Zone Descriptor Change Notices: Not Supported 00:22:18.085 Discovery Log Change Notices: Supported 00:22:18.085 Controller Attributes 00:22:18.085 128-bit Host Identifier: Not Supported 00:22:18.085 Non-Operational Permissive Mode: Not Supported 00:22:18.085 NVM Sets: Not Supported 00:22:18.085 Read Recovery Levels: Not Supported 00:22:18.085 Endurance Groups: Not Supported 00:22:18.085 
Predictable Latency Mode: Not Supported 00:22:18.085 Traffic Based Keep ALive: Not Supported 00:22:18.085 Namespace Granularity: Not Supported 00:22:18.085 SQ Associations: Not Supported 00:22:18.085 UUID List: Not Supported 00:22:18.085 Multi-Domain Subsystem: Not Supported 00:22:18.085 Fixed Capacity Management: Not Supported 00:22:18.085 Variable Capacity Management: Not Supported 00:22:18.085 Delete Endurance Group: Not Supported 00:22:18.085 Delete NVM Set: Not Supported 00:22:18.085 Extended LBA Formats Supported: Not Supported 00:22:18.085 Flexible Data Placement Supported: Not Supported 00:22:18.085 00:22:18.085 Controller Memory Buffer Support 00:22:18.085 ================================ 00:22:18.085 Supported: No 00:22:18.085 00:22:18.085 Persistent Memory Region Support 00:22:18.085 ================================ 00:22:18.085 Supported: No 00:22:18.085 00:22:18.085 Admin Command Set Attributes 00:22:18.085 ============================ 00:22:18.085 Security Send/Receive: Not Supported 00:22:18.085 Format NVM: Not Supported 00:22:18.085 Firmware Activate/Download: Not Supported 00:22:18.085 Namespace Management: Not Supported 00:22:18.085 Device Self-Test: Not Supported 00:22:18.085 Directives: Not Supported 00:22:18.085 NVMe-MI: Not Supported 00:22:18.085 Virtualization Management: Not Supported 00:22:18.085 Doorbell Buffer Config: Not Supported 00:22:18.085 Get LBA Status Capability: Not Supported 00:22:18.085 Command & Feature Lockdown Capability: Not Supported 00:22:18.085 Abort Command Limit: 1 00:22:18.085 Async Event Request Limit: 4 00:22:18.085 Number of Firmware Slots: N/A 00:22:18.085 Firmware Slot 1 Read-Only: N/A 00:22:18.085 Firmware Activation Without Reset: N/A 00:22:18.085 Multiple Update Detection Support: N/A 00:22:18.085 Firmware Update Granularity: No Information Provided 00:22:18.085 Per-Namespace SMART Log: No 00:22:18.085 Asymmetric Namespace Access Log Page: Not Supported 00:22:18.085 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:18.085 Command Effects Log Page: Not Supported 00:22:18.085 Get Log Page Extended Data: Supported 00:22:18.085 Telemetry Log Pages: Not Supported 00:22:18.085 Persistent Event Log Pages: Not Supported 00:22:18.085 Supported Log Pages Log Page: May Support 00:22:18.085 Commands Supported & Effects Log Page: Not Supported 00:22:18.085 Feature Identifiers & Effects Log Page:May Support 00:22:18.085 NVMe-MI Commands & Effects Log Page: May Support 00:22:18.085 Data Area 4 for Telemetry Log: Not Supported 00:22:18.085 Error Log Page Entries Supported: 128 00:22:18.085 Keep Alive: Not Supported 00:22:18.085 00:22:18.085 NVM Command Set Attributes 00:22:18.085 ========================== 00:22:18.085 Submission Queue Entry Size 00:22:18.085 Max: 1 00:22:18.085 Min: 1 00:22:18.085 Completion Queue Entry Size 00:22:18.085 Max: 1 00:22:18.085 Min: 1 00:22:18.085 Number of Namespaces: 0 00:22:18.085 Compare Command: Not Supported 00:22:18.085 Write Uncorrectable Command: Not Supported 00:22:18.085 Dataset Management Command: Not Supported 00:22:18.085 Write Zeroes Command: Not Supported 00:22:18.085 Set Features Save Field: Not Supported 00:22:18.085 Reservations: Not Supported 00:22:18.085 Timestamp: Not Supported 00:22:18.085 Copy: Not Supported 00:22:18.085 Volatile Write Cache: Not Present 00:22:18.085 Atomic Write Unit (Normal): 1 00:22:18.085 Atomic Write Unit (PFail): 1 00:22:18.085 Atomic Compare & Write Unit: 1 00:22:18.085 Fused Compare & Write: Supported 00:22:18.085 Scatter-Gather List 00:22:18.085 SGL Command Set: Supported 00:22:18.085 SGL Keyed: Supported 00:22:18.085 SGL Bit Bucket Descriptor: Not Supported 00:22:18.085 SGL Metadata Pointer: Not Supported 00:22:18.085 Oversized SGL: Not Supported 00:22:18.085 SGL Metadata Address: Not Supported 00:22:18.085 SGL Offset: Supported 00:22:18.085 Transport SGL Data Block: Not Supported 00:22:18.085 Replay Protected Memory Block: Not Supported 00:22:18.085 00:22:18.085 
Firmware Slot Information 00:22:18.085 ========================= 00:22:18.085 Active slot: 0 00:22:18.085 00:22:18.085 00:22:18.085 Error Log 00:22:18.085 ========= 00:22:18.085 00:22:18.085 Active Namespaces 00:22:18.085 ================= 00:22:18.085 Discovery Log Page 00:22:18.085 ================== 00:22:18.085 Generation Counter: 2 00:22:18.085 Number of Records: 2 00:22:18.085 Record Format: 0 00:22:18.085 00:22:18.085 Discovery Log Entry 0 00:22:18.085 ---------------------- 00:22:18.085 Transport Type: 3 (TCP) 00:22:18.085 Address Family: 1 (IPv4) 00:22:18.085 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:18.085 Entry Flags: 00:22:18.085 Duplicate Returned Information: 1 00:22:18.085 Explicit Persistent Connection Support for Discovery: 1 00:22:18.085 Transport Requirements: 00:22:18.085 Secure Channel: Not Required 00:22:18.085 Port ID: 0 (0x0000) 00:22:18.085 Controller ID: 65535 (0xffff) 00:22:18.085 Admin Max SQ Size: 128 00:22:18.085 Transport Service Identifier: 4420 00:22:18.085 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:18.085 Transport Address: 10.0.0.2 00:22:18.085 Discovery Log Entry 1 00:22:18.085 ---------------------- 00:22:18.085 Transport Type: 3 (TCP) 00:22:18.085 Address Family: 1 (IPv4) 00:22:18.085 Subsystem Type: 2 (NVM Subsystem) 00:22:18.085 Entry Flags: 00:22:18.085 Duplicate Returned Information: 0 00:22:18.085 Explicit Persistent Connection Support for Discovery: 0 00:22:18.085 Transport Requirements: 00:22:18.085 Secure Channel: Not Required 00:22:18.085 Port ID: 0 (0x0000) 00:22:18.085 Controller ID: 65535 (0xffff) 00:22:18.085 Admin Max SQ Size: 128 00:22:18.085 Transport Service Identifier: 4420 00:22:18.085 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:18.085 Transport Address: 10.0.0.2 [2024-11-20 16:15:18.655223] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:18.085 [2024-11-20 
16:15:18.655234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1100) on tqpair=0x1e3f690 00:22:18.085 [2024-11-20 16:15:18.655240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.085 [2024-11-20 16:15:18.655245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1280) on tqpair=0x1e3f690 00:22:18.085 [2024-11-20 16:15:18.655249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.086 [2024-11-20 16:15:18.655254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1400) on tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.086 [2024-11-20 16:15:18.655262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.086 [2024-11-20 16:15:18.655276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:18.086 [2024-11-20 16:15:18.655290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.086 [2024-11-20 16:15:18.655304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:18.086 [2024-11-20 16:15:18.655390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.086 [2024-11-20 
16:15:18.655396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.086 [2024-11-20 16:15:18.655399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:18.086 [2024-11-20 16:15:18.655421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.086 [2024-11-20 16:15:18.655434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:18.086 [2024-11-20 16:15:18.655528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.086 [2024-11-20 16:15:18.655534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.086 [2024-11-20 16:15:18.655537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655547] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:18.086 [2024-11-20 16:15:18.655551] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:18.086 [2024-11-20 16:15:18.655559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.086 
[2024-11-20 16:15:18.655567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:18.086 [2024-11-20 16:15:18.655572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.086 [2024-11-20 16:15:18.655582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:18.086 [2024-11-20 16:15:18.655643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.086 [2024-11-20 16:15:18.655648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.086 [2024-11-20 16:15:18.655651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:18.086 [2024-11-20 16:15:18.655676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.086 [2024-11-20 16:15:18.655685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:18.086 [2024-11-20 16:15:18.655745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.086 [2024-11-20 16:15:18.655751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.086 [2024-11-20 16:15:18.655754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on 
tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:18.086 [2024-11-20 16:15:18.655778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.086 [2024-11-20 16:15:18.655788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:18.086 [2024-11-20 16:15:18.655863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.086 [2024-11-20 16:15:18.655869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.086 [2024-11-20 16:15:18.655872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on tqpair=0x1e3f690 00:22:18.086 [2024-11-20 16:15:18.655883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.086 [2024-11-20 16:15:18.655890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3f690) 00:22:18.086 [2024-11-20 16:15:18.655896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.086 [2024-11-20 16:15:18.655906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea1580, cid 3, qid 0 00:22:18.086 [2024-11-20 16:15:18.655982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.086 [2024-11-20 16:15:18.655990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
0x1ea1580, cid 3, qid 0
00:22:18.089 [2024-11-20 16:15:18.663158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:18.089 [2024-11-20 16:15:18.663164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:18.089 [2024-11-20 16:15:18.663167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:18.089 [2024-11-20 16:15:18.663170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea1580) on tqpair=0x1e3f690
00:22:18.089 [2024-11-20 16:15:18.663177] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:22:18.089
00:22:18.089 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:18.089 [2024-11-20 16:15:18.701944] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:22:18.089 [2024-11-20 16:15:18.701981] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817321 ]
00:22:18.089 [2024-11-20 16:15:18.741584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:22:18.089 [2024-11-20 16:15:18.741628] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:22:18.089 [2024-11-20 16:15:18.741633] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:22:18.089 [2024-11-20 16:15:18.741648] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:22:18.089 [2024-11-20 16:15:18.741657] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:22:18.089 [2024-11-20 16:15:18.745127] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:22:18.089 [2024-11-20 16:15:18.745157] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcd5690 0
00:22:18.089 [2024-11-20 16:15:18.752960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:22:18.089 [2024-11-20 16:15:18.752973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:22:18.089 [2024-11-20 16:15:18.752977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:22:18.089 [2024-11-20 16:15:18.752980] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:22:18.089 [2024-11-20 16:15:18.753007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:18.089 [2024-11-20 16:15:18.753012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:18.089 [2024-11-20 16:15:18.753015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690)
00:22:18.090 [2024-11-20 16:15:18.753026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:22:18.090 [2024-11-20 16:15:18.753043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0
00:22:18.090 [2024-11-20 16:15:18.760957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:18.090 [2024-11-20 16:15:18.760965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:18.090 [2024-11-20 16:15:18.760968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.760972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690
00:22:18.090 [2024-11-20 16:15:18.760982] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:22:18.090 [2024-11-20 16:15:18.760988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:22:18.090 [2024-11-20 16:15:18.760993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:22:18.090 [2024-11-20 16:15:18.761004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.761008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.761011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690)
00:22:18.090 [2024-11-20 16:15:18.761019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.090 [2024-11-20 16:15:18.761032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0
00:22:18.090 [2024-11-20 16:15:18.761194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:18.090 [2024-11-20 16:15:18.761201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:18.090 [2024-11-20 16:15:18.761204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.761207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690
00:22:18.090 [2024-11-20 16:15:18.761212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:22:18.090 [2024-11-20 16:15:18.761218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:22:18.090 [2024-11-20 16:15:18.761225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.761228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.761231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690)
00:22:18.090 [2024-11-20 16:15:18.761238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.090 [2024-11-20 16:15:18.761248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0
00:22:18.090 [2024-11-20 16:15:18.761311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:18.090 [2024-11-20 16:15:18.761317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:18.090 [2024-11-20 16:15:18.761320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:18.090 [2024-11-20 16:15:18.761323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690
00:22:18.090 [2024-11-20 16:15:18.761328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting
state to check en (no timeout) 00:22:18.090 [2024-11-20 16:15:18.761335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:18.090 [2024-11-20 16:15:18.761341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.090 [2024-11-20 16:15:18.761353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.090 [2024-11-20 16:15:18.761363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0 00:22:18.090 [2024-11-20 16:15:18.761422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.090 [2024-11-20 16:15:18.761428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.090 [2024-11-20 16:15:18.761431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.090 [2024-11-20 16:15:18.761438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:18.090 [2024-11-20 16:15:18.761447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.090 [2024-11-20 16:15:18.761459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.090 [2024-11-20 16:15:18.761469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0 00:22:18.090 [2024-11-20 16:15:18.761528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.090 [2024-11-20 16:15:18.761533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.090 [2024-11-20 16:15:18.761536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.090 [2024-11-20 16:15:18.761544] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:18.090 [2024-11-20 16:15:18.761548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:18.090 [2024-11-20 16:15:18.761555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:18.090 [2024-11-20 16:15:18.761663] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:18.090 [2024-11-20 16:15:18.761667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:18.090 [2024-11-20 16:15:18.761674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.090 [2024-11-20 16:15:18.761687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.090 [2024-11-20 16:15:18.761698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0 00:22:18.090 [2024-11-20 16:15:18.761758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.090 [2024-11-20 16:15:18.761764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.090 [2024-11-20 16:15:18.761767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.090 [2024-11-20 16:15:18.761774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:18.090 [2024-11-20 16:15:18.761783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.090 [2024-11-20 16:15:18.761786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.761789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.761795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.091 [2024-11-20 16:15:18.761805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0 00:22:18.091 [2024-11-20 16:15:18.761865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.091 [2024-11-20 16:15:18.761871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.091 [2024-11-20 16:15:18.761874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.761877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.091 [2024-11-20 16:15:18.761881] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:18.091 [2024-11-20 16:15:18.761885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.761892] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:18.091 [2024-11-20 16:15:18.761903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.761911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.761915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.761921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.091 [2024-11-20 16:15:18.761930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0 00:22:18.091 [2024-11-20 16:15:18.762033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.091 [2024-11-20 16:15:18.762040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.091 [2024-11-20 16:15:18.762043] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762046] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=4096, cccid=0 00:22:18.091 [2024-11-20 16:15:18.762050] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37100) on tqpair(0xcd5690): expected_datao=0, payload_size=4096 00:22:18.091 [2024-11-20 16:15:18.762054] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762060] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762064] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.091 [2024-11-20 16:15:18.762084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.091 [2024-11-20 16:15:18.762088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.091 [2024-11-20 16:15:18.762099] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:18.091 [2024-11-20 16:15:18.762103] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:18.091 [2024-11-20 16:15:18.762107] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:18.091 [2024-11-20 16:15:18.762113] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:18.091 [2024-11-20 16:15:18.762117] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:18.091 [2024-11-20 16:15:18.762121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.762130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.762137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762140] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.762150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.091 [2024-11-20 16:15:18.762161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37100, cid 0, qid 0 00:22:18.091 [2024-11-20 16:15:18.762224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.091 [2024-11-20 16:15:18.762229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.091 [2024-11-20 16:15:18.762233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.091 [2024-11-20 16:15:18.762242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.762253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.091 [2024-11-20 16:15:18.762259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.762270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:18.091 [2024-11-20 16:15:18.762275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.762286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.091 [2024-11-20 16:15:18.762291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.762302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.091 [2024-11-20 16:15:18.762308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.762316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.762322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd5690) 00:22:18.091 [2024-11-20 16:15:18.762331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.091 [2024-11-20 16:15:18.762342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd37100, cid 0, qid 0 00:22:18.091 [2024-11-20 16:15:18.762346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37280, cid 1, qid 0 00:22:18.091 [2024-11-20 16:15:18.762351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37400, cid 2, qid 0 00:22:18.091 [2024-11-20 16:15:18.762355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.091 [2024-11-20 16:15:18.762359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.091 [2024-11-20 16:15:18.762455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.091 [2024-11-20 16:15:18.762461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.091 [2024-11-20 16:15:18.762464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.091 [2024-11-20 16:15:18.762467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.091 [2024-11-20 16:15:18.762473] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:18.091 [2024-11-20 16:15:18.762478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.762485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:18.091 [2024-11-20 16:15:18.762490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.762495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.762499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.092 [2024-11-20 
16:15:18.762502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd5690) 00:22:18.092 [2024-11-20 16:15:18.762507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.092 [2024-11-20 16:15:18.762517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.092 [2024-11-20 16:15:18.762580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.092 [2024-11-20 16:15:18.762586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.092 [2024-11-20 16:15:18.762589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.762592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.092 [2024-11-20 16:15:18.762643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.762653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.762660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.762663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd5690) 00:22:18.092 [2024-11-20 16:15:18.762670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.092 [2024-11-20 16:15:18.762680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.092 [2024-11-20 16:15:18.762757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.092 [2024-11-20 16:15:18.762763] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.092 [2024-11-20 16:15:18.762766] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.762769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=4096, cccid=4 00:22:18.092 [2024-11-20 16:15:18.762773] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37700) on tqpair(0xcd5690): expected_datao=0, payload_size=4096 00:22:18.092 [2024-11-20 16:15:18.762777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.762789] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.762793] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.803106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.092 [2024-11-20 16:15:18.803117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.092 [2024-11-20 16:15:18.803120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.803124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.092 [2024-11-20 16:15:18.803134] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:18.092 [2024-11-20 16:15:18.803143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.803153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.803160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.803164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xcd5690) 00:22:18.092 [2024-11-20 16:15:18.803170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.092 [2024-11-20 16:15:18.803183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.092 [2024-11-20 16:15:18.803269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.092 [2024-11-20 16:15:18.803275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.092 [2024-11-20 16:15:18.803278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.803281] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=4096, cccid=4 00:22:18.092 [2024-11-20 16:15:18.803285] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37700) on tqpair(0xcd5690): expected_datao=0, payload_size=4096 00:22:18.092 [2024-11-20 16:15:18.803289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.803301] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.803306] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.845082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.092 [2024-11-20 16:15:18.845092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.092 [2024-11-20 16:15:18.845095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.845098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.092 [2024-11-20 16:15:18.845111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:18.092 [2024-11-20 
16:15:18.845120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.845129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.845133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd5690) 00:22:18.092 [2024-11-20 16:15:18.845140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.092 [2024-11-20 16:15:18.845152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.092 [2024-11-20 16:15:18.845221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.092 [2024-11-20 16:15:18.845227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.092 [2024-11-20 16:15:18.845230] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.845233] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=4096, cccid=4 00:22:18.092 [2024-11-20 16:15:18.845237] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37700) on tqpair(0xcd5690): expected_datao=0, payload_size=4096 00:22:18.092 [2024-11-20 16:15:18.845241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.845254] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.845258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.890956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.092 [2024-11-20 16:15:18.890965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.092 [2024-11-20 16:15:18.890968] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.890971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.092 [2024-11-20 16:15:18.890979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.890987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.890995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.891001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.891006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.891010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.891015] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:18.092 [2024-11-20 16:15:18.891019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:18.092 [2024-11-20 16:15:18.891024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:18.092 [2024-11-20 16:15:18.891036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.891040] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd5690) 00:22:18.092 [2024-11-20 16:15:18.891047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.092 [2024-11-20 16:15:18.891052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.891055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.891059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd5690) 00:22:18.092 [2024-11-20 16:15:18.891066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.092 [2024-11-20 16:15:18.891080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.092 [2024-11-20 16:15:18.891085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37880, cid 5, qid 0 00:22:18.092 [2024-11-20 16:15:18.891171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.092 [2024-11-20 16:15:18.891177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.092 [2024-11-20 16:15:18.891180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.092 [2024-11-20 16:15:18.891184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.092 [2024-11-20 16:15:18.891190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.092 [2024-11-20 16:15:18.891194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.092 [2024-11-20 16:15:18.891197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37880) on tqpair=0xcd5690 00:22:18.093 [2024-11-20 
16:15:18.891208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 [2024-11-20 16:15:18.891228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37880, cid 5, qid 0 00:22:18.093 [2024-11-20 16:15:18.891291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.093 [2024-11-20 16:15:18.891297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.093 [2024-11-20 16:15:18.891300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37880) on tqpair=0xcd5690 00:22:18.093 [2024-11-20 16:15:18.891311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 [2024-11-20 16:15:18.891329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37880, cid 5, qid 0 00:22:18.093 [2024-11-20 16:15:18.891389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.093 [2024-11-20 16:15:18.891395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.093 [2024-11-20 16:15:18.891398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xd37880) on tqpair=0xcd5690 00:22:18.093 [2024-11-20 16:15:18.891409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 [2024-11-20 16:15:18.891427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37880, cid 5, qid 0 00:22:18.093 [2024-11-20 16:15:18.891492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.093 [2024-11-20 16:15:18.891497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.093 [2024-11-20 16:15:18.891500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37880) on tqpair=0xcd5690 00:22:18.093 [2024-11-20 16:15:18.891516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 [2024-11-20 16:15:18.891536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 
[2024-11-20 16:15:18.891551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 [2024-11-20 16:15:18.891566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcd5690) 00:22:18.093 [2024-11-20 16:15:18.891575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.093 [2024-11-20 16:15:18.891585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37880, cid 5, qid 0 00:22:18.093 [2024-11-20 16:15:18.891590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37700, cid 4, qid 0 00:22:18.093 [2024-11-20 16:15:18.891594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37a00, cid 6, qid 0 00:22:18.093 [2024-11-20 16:15:18.891598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37b80, cid 7, qid 0 00:22:18.093 [2024-11-20 16:15:18.891742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.093 [2024-11-20 16:15:18.891749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.093 [2024-11-20 16:15:18.891752] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891755] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=8192, cccid=5 00:22:18.093 [2024-11-20 16:15:18.891759] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xd37880) on tqpair(0xcd5690): expected_datao=0, payload_size=8192 00:22:18.093 [2024-11-20 16:15:18.891763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891780] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891784] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.093 [2024-11-20 16:15:18.891793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.093 [2024-11-20 16:15:18.891796] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891799] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=512, cccid=4 00:22:18.093 [2024-11-20 16:15:18.891803] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37700) on tqpair(0xcd5690): expected_datao=0, payload_size=512 00:22:18.093 [2024-11-20 16:15:18.891807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891813] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891815] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.093 [2024-11-20 16:15:18.891825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.093 [2024-11-20 16:15:18.891828] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891831] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=512, cccid=6 00:22:18.093 [2024-11-20 16:15:18.891837] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37a00) on tqpair(0xcd5690): expected_datao=0, 
payload_size=512 00:22:18.093 [2024-11-20 16:15:18.891840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891846] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891849] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:18.093 [2024-11-20 16:15:18.891858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:18.093 [2024-11-20 16:15:18.891861] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891864] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd5690): datao=0, datal=4096, cccid=7 00:22:18.093 [2024-11-20 16:15:18.891868] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd37b80) on tqpair(0xcd5690): expected_datao=0, payload_size=4096 00:22:18.093 [2024-11-20 16:15:18.891872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891878] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891880] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.093 [2024-11-20 16:15:18.891893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.093 [2024-11-20 16:15:18.891896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.093 [2024-11-20 16:15:18.891899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37880) on tqpair=0xcd5690 00:22:18.093 [2024-11-20 16:15:18.891909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.093 [2024-11-20 16:15:18.891914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.093 [2024-11-20 
16:15:18.891918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.094 [2024-11-20 16:15:18.891921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37700) on tqpair=0xcd5690 00:22:18.094 [2024-11-20 16:15:18.891929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.094 [2024-11-20 16:15:18.891935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.094 [2024-11-20 16:15:18.891938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.094 [2024-11-20 16:15:18.891941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37a00) on tqpair=0xcd5690 00:22:18.094 [2024-11-20 16:15:18.891951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.094 [2024-11-20 16:15:18.891957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.094 [2024-11-20 16:15:18.891960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.094 [2024-11-20 16:15:18.891963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37b80) on tqpair=0xcd5690 00:22:18.094 ===================================================== 00:22:18.094 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.094 ===================================================== 00:22:18.094 Controller Capabilities/Features 00:22:18.094 ================================ 00:22:18.094 Vendor ID: 8086 00:22:18.094 Subsystem Vendor ID: 8086 00:22:18.094 Serial Number: SPDK00000000000001 00:22:18.094 Model Number: SPDK bdev Controller 00:22:18.094 Firmware Version: 25.01 00:22:18.094 Recommended Arb Burst: 6 00:22:18.094 IEEE OUI Identifier: e4 d2 5c 00:22:18.094 Multi-path I/O 00:22:18.094 May have multiple subsystem ports: Yes 00:22:18.094 May have multiple controllers: Yes 00:22:18.094 Associated with SR-IOV VF: No 00:22:18.094 Max Data Transfer Size: 131072 00:22:18.094 Max Number of Namespaces: 32 00:22:18.094 
Max Number of I/O Queues: 127 00:22:18.094 NVMe Specification Version (VS): 1.3 00:22:18.094 NVMe Specification Version (Identify): 1.3 00:22:18.094 Maximum Queue Entries: 128 00:22:18.094 Contiguous Queues Required: Yes 00:22:18.094 Arbitration Mechanisms Supported 00:22:18.094 Weighted Round Robin: Not Supported 00:22:18.094 Vendor Specific: Not Supported 00:22:18.094 Reset Timeout: 15000 ms 00:22:18.094 Doorbell Stride: 4 bytes 00:22:18.094 NVM Subsystem Reset: Not Supported 00:22:18.094 Command Sets Supported 00:22:18.094 NVM Command Set: Supported 00:22:18.094 Boot Partition: Not Supported 00:22:18.094 Memory Page Size Minimum: 4096 bytes 00:22:18.094 Memory Page Size Maximum: 4096 bytes 00:22:18.094 Persistent Memory Region: Not Supported 00:22:18.094 Optional Asynchronous Events Supported 00:22:18.094 Namespace Attribute Notices: Supported 00:22:18.094 Firmware Activation Notices: Not Supported 00:22:18.094 ANA Change Notices: Not Supported 00:22:18.094 PLE Aggregate Log Change Notices: Not Supported 00:22:18.094 LBA Status Info Alert Notices: Not Supported 00:22:18.094 EGE Aggregate Log Change Notices: Not Supported 00:22:18.094 Normal NVM Subsystem Shutdown event: Not Supported 00:22:18.094 Zone Descriptor Change Notices: Not Supported 00:22:18.094 Discovery Log Change Notices: Not Supported 00:22:18.094 Controller Attributes 00:22:18.094 128-bit Host Identifier: Supported 00:22:18.094 Non-Operational Permissive Mode: Not Supported 00:22:18.094 NVM Sets: Not Supported 00:22:18.094 Read Recovery Levels: Not Supported 00:22:18.094 Endurance Groups: Not Supported 00:22:18.094 Predictable Latency Mode: Not Supported 00:22:18.094 Traffic Based Keep ALive: Not Supported 00:22:18.094 Namespace Granularity: Not Supported 00:22:18.094 SQ Associations: Not Supported 00:22:18.094 UUID List: Not Supported 00:22:18.094 Multi-Domain Subsystem: Not Supported 00:22:18.094 Fixed Capacity Management: Not Supported 00:22:18.094 Variable Capacity Management: Not Supported 
00:22:18.094 Delete Endurance Group: Not Supported 00:22:18.094 Delete NVM Set: Not Supported 00:22:18.094 Extended LBA Formats Supported: Not Supported 00:22:18.094 Flexible Data Placement Supported: Not Supported 00:22:18.094 00:22:18.094 Controller Memory Buffer Support 00:22:18.094 ================================ 00:22:18.094 Supported: No 00:22:18.094 00:22:18.094 Persistent Memory Region Support 00:22:18.094 ================================ 00:22:18.094 Supported: No 00:22:18.094 00:22:18.094 Admin Command Set Attributes 00:22:18.094 ============================ 00:22:18.094 Security Send/Receive: Not Supported 00:22:18.094 Format NVM: Not Supported 00:22:18.094 Firmware Activate/Download: Not Supported 00:22:18.094 Namespace Management: Not Supported 00:22:18.094 Device Self-Test: Not Supported 00:22:18.094 Directives: Not Supported 00:22:18.094 NVMe-MI: Not Supported 00:22:18.094 Virtualization Management: Not Supported 00:22:18.094 Doorbell Buffer Config: Not Supported 00:22:18.094 Get LBA Status Capability: Not Supported 00:22:18.094 Command & Feature Lockdown Capability: Not Supported 00:22:18.094 Abort Command Limit: 4 00:22:18.094 Async Event Request Limit: 4 00:22:18.094 Number of Firmware Slots: N/A 00:22:18.094 Firmware Slot 1 Read-Only: N/A 00:22:18.094 Firmware Activation Without Reset: N/A 00:22:18.094 Multiple Update Detection Support: N/A 00:22:18.094 Firmware Update Granularity: No Information Provided 00:22:18.094 Per-Namespace SMART Log: No 00:22:18.094 Asymmetric Namespace Access Log Page: Not Supported 00:22:18.094 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:18.094 Command Effects Log Page: Supported 00:22:18.094 Get Log Page Extended Data: Supported 00:22:18.094 Telemetry Log Pages: Not Supported 00:22:18.094 Persistent Event Log Pages: Not Supported 00:22:18.094 Supported Log Pages Log Page: May Support 00:22:18.094 Commands Supported & Effects Log Page: Not Supported 00:22:18.094 Feature Identifiers & Effects Log Page:May Support 
00:22:18.094 NVMe-MI Commands & Effects Log Page: May Support 00:22:18.094 Data Area 4 for Telemetry Log: Not Supported 00:22:18.094 Error Log Page Entries Supported: 128 00:22:18.094 Keep Alive: Supported 00:22:18.094 Keep Alive Granularity: 10000 ms 00:22:18.094 00:22:18.094 NVM Command Set Attributes 00:22:18.094 ========================== 00:22:18.094 Submission Queue Entry Size 00:22:18.094 Max: 64 00:22:18.094 Min: 64 00:22:18.094 Completion Queue Entry Size 00:22:18.094 Max: 16 00:22:18.094 Min: 16 00:22:18.094 Number of Namespaces: 32 00:22:18.094 Compare Command: Supported 00:22:18.094 Write Uncorrectable Command: Not Supported 00:22:18.094 Dataset Management Command: Supported 00:22:18.094 Write Zeroes Command: Supported 00:22:18.094 Set Features Save Field: Not Supported 00:22:18.094 Reservations: Supported 00:22:18.094 Timestamp: Not Supported 00:22:18.094 Copy: Supported 00:22:18.094 Volatile Write Cache: Present 00:22:18.094 Atomic Write Unit (Normal): 1 00:22:18.094 Atomic Write Unit (PFail): 1 00:22:18.094 Atomic Compare & Write Unit: 1 00:22:18.094 Fused Compare & Write: Supported 00:22:18.094 Scatter-Gather List 00:22:18.094 SGL Command Set: Supported 00:22:18.094 SGL Keyed: Supported 00:22:18.094 SGL Bit Bucket Descriptor: Not Supported 00:22:18.094 SGL Metadata Pointer: Not Supported 00:22:18.094 Oversized SGL: Not Supported 00:22:18.094 SGL Metadata Address: Not Supported 00:22:18.095 SGL Offset: Supported 00:22:18.095 Transport SGL Data Block: Not Supported 00:22:18.095 Replay Protected Memory Block: Not Supported 00:22:18.095 00:22:18.095 Firmware Slot Information 00:22:18.095 ========================= 00:22:18.095 Active slot: 1 00:22:18.095 Slot 1 Firmware Revision: 25.01 00:22:18.095 00:22:18.095 00:22:18.095 Commands Supported and Effects 00:22:18.095 ============================== 00:22:18.095 Admin Commands 00:22:18.095 -------------- 00:22:18.095 Get Log Page (02h): Supported 00:22:18.095 Identify (06h): Supported 00:22:18.095 Abort 
(08h): Supported 00:22:18.095 Set Features (09h): Supported 00:22:18.095 Get Features (0Ah): Supported 00:22:18.095 Asynchronous Event Request (0Ch): Supported 00:22:18.095 Keep Alive (18h): Supported 00:22:18.095 I/O Commands 00:22:18.095 ------------ 00:22:18.095 Flush (00h): Supported LBA-Change 00:22:18.095 Write (01h): Supported LBA-Change 00:22:18.095 Read (02h): Supported 00:22:18.095 Compare (05h): Supported 00:22:18.095 Write Zeroes (08h): Supported LBA-Change 00:22:18.095 Dataset Management (09h): Supported LBA-Change 00:22:18.095 Copy (19h): Supported LBA-Change 00:22:18.095 00:22:18.095 Error Log 00:22:18.095 ========= 00:22:18.095 00:22:18.095 Arbitration 00:22:18.095 =========== 00:22:18.095 Arbitration Burst: 1 00:22:18.095 00:22:18.095 Power Management 00:22:18.095 ================ 00:22:18.095 Number of Power States: 1 00:22:18.095 Current Power State: Power State #0 00:22:18.095 Power State #0: 00:22:18.095 Max Power: 0.00 W 00:22:18.095 Non-Operational State: Operational 00:22:18.095 Entry Latency: Not Reported 00:22:18.095 Exit Latency: Not Reported 00:22:18.095 Relative Read Throughput: 0 00:22:18.095 Relative Read Latency: 0 00:22:18.095 Relative Write Throughput: 0 00:22:18.095 Relative Write Latency: 0 00:22:18.095 Idle Power: Not Reported 00:22:18.095 Active Power: Not Reported 00:22:18.095 Non-Operational Permissive Mode: Not Supported 00:22:18.095 00:22:18.095 Health Information 00:22:18.095 ================== 00:22:18.095 Critical Warnings: 00:22:18.095 Available Spare Space: OK 00:22:18.095 Temperature: OK 00:22:18.095 Device Reliability: OK 00:22:18.095 Read Only: No 00:22:18.095 Volatile Memory Backup: OK 00:22:18.095 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:18.095 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:18.095 Available Spare: 0% 00:22:18.095 Available Spare Threshold: 0% 00:22:18.095 Life Percentage Used:[2024-11-20 16:15:18.892046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.095 
[2024-11-20 16:15:18.892050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcd5690) 00:22:18.095 [2024-11-20 16:15:18.892056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.095 [2024-11-20 16:15:18.892068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37b80, cid 7, qid 0 00:22:18.095 [2024-11-20 16:15:18.892151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.095 [2024-11-20 16:15:18.892157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.095 [2024-11-20 16:15:18.892160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37b80) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:18.095 [2024-11-20 16:15:18.892199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37100) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.095 [2024-11-20 16:15:18.892212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37280) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.095 [2024-11-20 16:15:18.892221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37400) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.095 
[2024-11-20 16:15:18.892229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.095 [2024-11-20 16:15:18.892239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.095 [2024-11-20 16:15:18.892252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.095 [2024-11-20 16:15:18.892263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.095 [2024-11-20 16:15:18.892334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.095 [2024-11-20 16:15:18.892340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.095 [2024-11-20 16:15:18.892343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.095 [2024-11-20 16:15:18.892363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.095 [2024-11-20 16:15:18.892375] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.095 [2024-11-20 16:15:18.892449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.095 [2024-11-20 16:15:18.892454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.095 [2024-11-20 16:15:18.892457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892465] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:18.095 [2024-11-20 16:15:18.892469] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:18.095 [2024-11-20 16:15:18.892477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.095 [2024-11-20 16:15:18.892489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.095 [2024-11-20 16:15:18.892498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.095 [2024-11-20 16:15:18.892565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.095 [2024-11-20 16:15:18.892571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.095 [2024-11-20 16:15:18.892576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.095 [2024-11-20 16:15:18.892587] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.095 [2024-11-20 16:15:18.892594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.095 [2024-11-20 16:15:18.892599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.892608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.892684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.892690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.892693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.892705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.892717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.892726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.892793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.892798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.892801] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.892813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.892825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.892835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.892894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.892899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.892902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.892914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.892920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.892926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.892935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 
16:15:18.893012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.893018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.893021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.893034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.893046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.893056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.893128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.893134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.893136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.893148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.893160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.893169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.893236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.893241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.893244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.893256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.893269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.893279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.893341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.893346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.893349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.893360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 
[2024-11-20 16:15:18.893367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.096 [2024-11-20 16:15:18.893372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.096 [2024-11-20 16:15:18.893381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.096 [2024-11-20 16:15:18.893458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.096 [2024-11-20 16:15:18.893464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.096 [2024-11-20 16:15:18.893467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.096 [2024-11-20 16:15:18.893480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.096 [2024-11-20 16:15:18.893487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.893492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.893501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.893573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.893579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.893582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 
00:22:18.097 [2024-11-20 16:15:18.893593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.893605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.893615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.893676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.893682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.893684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.893697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.893709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.893718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.893779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.893785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 
[2024-11-20 16:15:18.893788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.893799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.893811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.893820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.893878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.893884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.893886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.893898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.893906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.893912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.893921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 
00:22:18.097 [2024-11-20 16:15:18.893998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.894004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.894006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.894018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.894030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.894040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.894105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.894111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.894114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.894126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.894138] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.894147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.894210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.894216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.894219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.894230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.894242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.894251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.894309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.894315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.894318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.894329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894333] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.894343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.894352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.894427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.894432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.894435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.097 [2024-11-20 16:15:18.894446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.097 [2024-11-20 16:15:18.894458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.097 [2024-11-20 16:15:18.894467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.097 [2024-11-20 16:15:18.894536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.097 [2024-11-20 16:15:18.894541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.097 [2024-11-20 16:15:18.894544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.097 [2024-11-20 16:15:18.894547] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.098 [2024-11-20 16:15:18.894556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.098 [2024-11-20 16:15:18.894568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.098 [2024-11-20 16:15:18.894577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.098 [2024-11-20 16:15:18.894635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.098 [2024-11-20 16:15:18.894641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.098 [2024-11-20 16:15:18.894644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.098 [2024-11-20 16:15:18.894655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.098 [2024-11-20 16:15:18.894667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.098 [2024-11-20 16:15:18.894676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.098 [2024-11-20 16:15:18.894736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.098 [2024-11-20 
16:15:18.894741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.098 [2024-11-20 16:15:18.894744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.098 [2024-11-20 16:15:18.894755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.098 [2024-11-20 16:15:18.894767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.098 [2024-11-20 16:15:18.894779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.098 [2024-11-20 16:15:18.894837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.098 [2024-11-20 16:15:18.894842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.098 [2024-11-20 16:15:18.894845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.098 [2024-11-20 16:15:18.894856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.894863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.098 [2024-11-20 16:15:18.894868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.098 [2024-11-20 
16:15:18.894878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.098 [2024-11-20 16:15:18.894942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.098 [2024-11-20 16:15:18.898952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.098 [2024-11-20 16:15:18.898957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.898960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.098 [2024-11-20 16:15:18.898970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.898973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.898976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd5690) 00:22:18.098 [2024-11-20 16:15:18.898982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.098 [2024-11-20 16:15:18.898993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd37580, cid 3, qid 0 00:22:18.098 [2024-11-20 16:15:18.899144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:18.098 [2024-11-20 16:15:18.899150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:18.098 [2024-11-20 16:15:18.899153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:18.098 [2024-11-20 16:15:18.899156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd37580) on tqpair=0xcd5690 00:22:18.098 [2024-11-20 16:15:18.899163] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:18.098 0% 00:22:18.098 Data Units Read: 0 00:22:18.098 Data Units Written: 0 00:22:18.098 Host Read Commands: 0 00:22:18.098 Host Write Commands: 0 
00:22:18.098 Controller Busy Time: 0 minutes 00:22:18.098 Power Cycles: 0 00:22:18.098 Power On Hours: 0 hours 00:22:18.098 Unsafe Shutdowns: 0 00:22:18.098 Unrecoverable Media Errors: 0 00:22:18.098 Lifetime Error Log Entries: 0 00:22:18.098 Warning Temperature Time: 0 minutes 00:22:18.098 Critical Temperature Time: 0 minutes 00:22:18.098 00:22:18.098 Number of Queues 00:22:18.098 ================ 00:22:18.098 Number of I/O Submission Queues: 127 00:22:18.098 Number of I/O Completion Queues: 127 00:22:18.098 00:22:18.098 Active Namespaces 00:22:18.098 ================= 00:22:18.098 Namespace ID:1 00:22:18.098 Error Recovery Timeout: Unlimited 00:22:18.098 Command Set Identifier: NVM (00h) 00:22:18.098 Deallocate: Supported 00:22:18.098 Deallocated/Unwritten Error: Not Supported 00:22:18.098 Deallocated Read Value: Unknown 00:22:18.098 Deallocate in Write Zeroes: Not Supported 00:22:18.098 Deallocated Guard Field: 0xFFFF 00:22:18.098 Flush: Supported 00:22:18.098 Reservation: Supported 00:22:18.098 Namespace Sharing Capabilities: Multiple Controllers 00:22:18.098 Size (in LBAs): 131072 (0GiB) 00:22:18.098 Capacity (in LBAs): 131072 (0GiB) 00:22:18.098 Utilization (in LBAs): 131072 (0GiB) 00:22:18.098 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:18.098 EUI64: ABCDEF0123456789 00:22:18.098 UUID: 82ac9fab-ff7d-4fac-b01f-453c198901a8 00:22:18.098 Thin Provisioning: Not Supported 00:22:18.098 Per-NS Atomic Units: Yes 00:22:18.098 Atomic Boundary Size (Normal): 0 00:22:18.098 Atomic Boundary Size (PFail): 0 00:22:18.098 Atomic Boundary Offset: 0 00:22:18.098 Maximum Single Source Range Length: 65535 00:22:18.098 Maximum Copy Length: 65535 00:22:18.098 Maximum Source Range Count: 1 00:22:18.098 NGUID/EUI64 Never Reused: No 00:22:18.098 Namespace Write Protected: No 00:22:18.098 Number of LBA Formats: 1 00:22:18.098 Current LBA Format: LBA Format #00 00:22:18.098 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:18.098 00:22:18.357 16:15:18 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.358 rmmod nvme_tcp 00:22:18.358 rmmod nvme_fabrics 00:22:18.358 rmmod nvme_keyring 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2817158 ']' 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2817158 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2817158 ']' 
00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2817158 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.358 16:15:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817158 00:22:18.358 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.358 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.358 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817158' 00:22:18.358 killing process with pid 2817158 00:22:18.358 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2817158 00:22:18.358 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2817158 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.617 16:15:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.524 00:22:20.524 real 0m10.021s 00:22:20.524 user 0m8.306s 00:22:20.524 sys 0m4.899s 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.524 ************************************ 00:22:20.524 END TEST nvmf_identify 00:22:20.524 ************************************ 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.524 16:15:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.783 ************************************ 00:22:20.783 START TEST nvmf_perf 00:22:20.783 ************************************ 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:20.783 * Looking for test storage... 
00:22:20.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.783 --rc genhtml_branch_coverage=1 00:22:20.783 --rc genhtml_function_coverage=1 00:22:20.783 --rc genhtml_legend=1 00:22:20.783 --rc geninfo_all_blocks=1 00:22:20.783 --rc geninfo_unexecuted_blocks=1 00:22:20.783 00:22:20.783 ' 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:20.783 --rc genhtml_branch_coverage=1 00:22:20.783 --rc genhtml_function_coverage=1 00:22:20.783 --rc genhtml_legend=1 00:22:20.783 --rc geninfo_all_blocks=1 00:22:20.783 --rc geninfo_unexecuted_blocks=1 00:22:20.783 00:22:20.783 ' 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.783 --rc genhtml_branch_coverage=1 00:22:20.783 --rc genhtml_function_coverage=1 00:22:20.783 --rc genhtml_legend=1 00:22:20.783 --rc geninfo_all_blocks=1 00:22:20.783 --rc geninfo_unexecuted_blocks=1 00:22:20.783 00:22:20.783 ' 00:22:20.783 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.784 --rc genhtml_branch_coverage=1 00:22:20.784 --rc genhtml_function_coverage=1 00:22:20.784 --rc genhtml_legend=1 00:22:20.784 --rc geninfo_all_blocks=1 00:22:20.784 --rc geninfo_unexecuted_blocks=1 00:22:20.784 00:22:20.784 ' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:20.784 16:15:21 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.784 16:15:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.365 16:15:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.365 
16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.365 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.365 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.366 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.366 16:15:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.366 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:22:27.366 00:22:27.366 --- 10.0.0.2 ping statistics --- 00:22:27.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.366 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:22:27.366 00:22:27.366 --- 10.0.0.1 ping statistics --- 00:22:27.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.366 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2820934 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2820934 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2820934 ']' 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.366 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:27.366 [2024-11-20 16:15:27.580221] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:22:27.366 [2024-11-20 16:15:27.580268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.366 [2024-11-20 16:15:27.658256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.367 [2024-11-20 16:15:27.700636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.367 [2024-11-20 16:15:27.700677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.367 [2024-11-20 16:15:27.700685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.367 [2024-11-20 16:15:27.700691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.367 [2024-11-20 16:15:27.700695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.367 [2024-11-20 16:15:27.702155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.367 [2024-11-20 16:15:27.702267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.367 [2024-11-20 16:15:27.702371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.367 [2024-11-20 16:15:27.702373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:27.367 16:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:30.650 16:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:30.650 16:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:30.650 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:30.650 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:30.650 16:15:31 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:30.650 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:30.650 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:30.650 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:30.650 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.908 [2024-11-20 16:15:31.490653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.908 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.908 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:30.908 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.167 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:31.167 16:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:31.425 16:15:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.684 [2024-11-20 16:15:32.295136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.684 16:15:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:31.942 16:15:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:31.942 16:15:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:31.942 16:15:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:31.942 16:15:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:33.318 Initializing NVMe Controllers 00:22:33.318 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:33.318 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:33.318 Initialization complete. Launching workers. 00:22:33.318 ======================================================== 00:22:33.318 Latency(us) 00:22:33.318 Device Information : IOPS MiB/s Average min max 00:22:33.318 PCIE (0000:5e:00.0) NSID 1 from core 0: 97438.47 380.62 327.96 9.69 5341.43 00:22:33.318 ======================================================== 00:22:33.318 Total : 97438.47 380.62 327.96 9.69 5341.43 00:22:33.318 00:22:33.318 16:15:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.357 Initializing NVMe Controllers 00:22:34.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:34.357 Initialization complete. Launching workers. 
00:22:34.357 ======================================================== 00:22:34.357 Latency(us) 00:22:34.357 Device Information : IOPS MiB/s Average min max 00:22:34.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 409.00 1.60 2537.34 105.61 44697.71 00:22:34.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19700.56 7949.74 47899.47 00:22:34.357 ======================================================== 00:22:34.357 Total : 460.00 1.80 4440.22 105.61 47899.47 00:22:34.357 00:22:34.357 16:15:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:35.773 Initializing NVMe Controllers 00:22:35.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.773 Initialization complete. Launching workers. 
00:22:35.773 ======================================================== 00:22:35.773 Latency(us) 00:22:35.773 Device Information : IOPS MiB/s Average min max 00:22:35.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10864.26 42.44 2944.11 343.00 7831.07 00:22:35.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3784.81 14.78 8482.31 5461.06 16099.12 00:22:35.773 ======================================================== 00:22:35.773 Total : 14649.07 57.22 4374.99 343.00 16099.12 00:22:35.773 00:22:35.773 16:15:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:35.773 16:15:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:35.773 16:15:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:38.302 Initializing NVMe Controllers 00:22:38.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.302 Controller IO queue size 128, less than required. 00:22:38.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.302 Controller IO queue size 128, less than required. 00:22:38.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:38.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:38.302 Initialization complete. Launching workers. 
00:22:38.302 ======================================================== 00:22:38.302 Latency(us) 00:22:38.302 Device Information : IOPS MiB/s Average min max 00:22:38.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1749.19 437.30 74296.66 47841.62 112989.23 00:22:38.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.39 150.35 221618.97 80342.74 321286.91 00:22:38.302 ======================================================== 00:22:38.302 Total : 2350.58 587.65 111988.86 47841.62 321286.91 00:22:38.302 00:22:38.302 16:15:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:38.302 No valid NVMe controllers or AIO or URING devices found 00:22:38.302 Initializing NVMe Controllers 00:22:38.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.302 Controller IO queue size 128, less than required. 00:22:38.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.302 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:38.302 Controller IO queue size 128, less than required. 00:22:38.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.302 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:38.302 WARNING: Some requested NVMe devices were skipped 00:22:38.302 16:15:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:40.830 Initializing NVMe Controllers 00:22:40.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.830 Controller IO queue size 128, less than required. 00:22:40.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.830 Controller IO queue size 128, less than required. 00:22:40.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:40.830 Initialization complete. Launching workers. 
00:22:40.830 00:22:40.830 ==================== 00:22:40.830 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:40.830 TCP transport: 00:22:40.830 polls: 10736 00:22:40.830 idle_polls: 7349 00:22:40.830 sock_completions: 3387 00:22:40.830 nvme_completions: 6289 00:22:40.830 submitted_requests: 9472 00:22:40.830 queued_requests: 1 00:22:40.830 00:22:40.830 ==================== 00:22:40.830 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:40.830 TCP transport: 00:22:40.830 polls: 14437 00:22:40.830 idle_polls: 11036 00:22:40.830 sock_completions: 3401 00:22:40.830 nvme_completions: 6349 00:22:40.830 submitted_requests: 9552 00:22:40.830 queued_requests: 1 00:22:40.830 ======================================================== 00:22:40.830 Latency(us) 00:22:40.830 Device Information : IOPS MiB/s Average min max 00:22:40.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1571.87 392.97 84144.04 54199.44 129433.23 00:22:40.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1586.87 396.72 81399.51 45607.99 114207.42 00:22:40.830 ======================================================== 00:22:40.830 Total : 3158.74 789.68 82765.26 45607.99 129433.23 00:22:40.830 00:22:40.830 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:40.830 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.087 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:41.087 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:41.087 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:41.087 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.087 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:41.088 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.088 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:41.088 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.088 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.088 rmmod nvme_tcp 00:22:41.088 rmmod nvme_fabrics 00:22:41.088 rmmod nvme_keyring 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2820934 ']' 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2820934 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2820934 ']' 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2820934 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820934 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820934' 00:22:41.346 killing process with pid 2820934 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2820934 00:22:41.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2820934 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.721 16:15:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.257 00:22:45.257 real 0m24.175s 00:22:45.257 user 1m2.749s 00:22:45.257 sys 0m8.325s 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:45.257 ************************************ 00:22:45.257 END TEST nvmf_perf 00:22:45.257 ************************************ 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.257 16:15:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 ************************************ 00:22:45.258 START TEST nvmf_fio_host 00:22:45.258 ************************************ 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:45.258 * Looking for test storage... 00:22:45.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.258 16:15:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.258 16:15:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.258 --rc genhtml_branch_coverage=1 00:22:45.258 --rc genhtml_function_coverage=1 00:22:45.258 --rc genhtml_legend=1 00:22:45.258 --rc geninfo_all_blocks=1 00:22:45.258 --rc geninfo_unexecuted_blocks=1 00:22:45.258 00:22:45.258 ' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.258 --rc genhtml_branch_coverage=1 00:22:45.258 --rc genhtml_function_coverage=1 00:22:45.258 --rc genhtml_legend=1 00:22:45.258 --rc geninfo_all_blocks=1 00:22:45.258 --rc geninfo_unexecuted_blocks=1 00:22:45.258 00:22:45.258 ' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.258 --rc genhtml_branch_coverage=1 00:22:45.258 --rc genhtml_function_coverage=1 00:22:45.258 --rc genhtml_legend=1 00:22:45.258 --rc geninfo_all_blocks=1 00:22:45.258 --rc geninfo_unexecuted_blocks=1 00:22:45.258 00:22:45.258 ' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.258 --rc genhtml_branch_coverage=1 00:22:45.258 --rc genhtml_function_coverage=1 00:22:45.258 --rc genhtml_legend=1 00:22:45.258 --rc geninfo_all_blocks=1 00:22:45.258 --rc geninfo_unexecuted_blocks=1 00:22:45.258 00:22:45.258 ' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.258 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.259 16:15:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.259 16:15:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.827 16:15:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.827 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.827 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.827 16:15:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.827 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:22:51.828 00:22:51.828 --- 10.0.0.2 ping statistics --- 00:22:51.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.828 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:51.828 00:22:51.828 --- 10.0.0.1 ping statistics --- 00:22:51.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.828 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2827042 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2827042 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2827042 ']' 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.828 16:15:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.828 [2024-11-20 16:15:51.811731] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:22:51.828 [2024-11-20 16:15:51.811776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.828 [2024-11-20 16:15:51.892476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.828 [2024-11-20 16:15:51.935215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.828 [2024-11-20 16:15:51.935253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:51.828 [2024-11-20 16:15:51.935260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.828 [2024-11-20 16:15:51.935266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.828 [2024-11-20 16:15:51.935271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.828 [2024-11-20 16:15:51.936872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.828 [2024-11-20 16:15:51.936994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.828 [2024-11-20 16:15:51.937047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.828 [2024-11-20 16:15:51.937048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:51.828 [2024-11-20 16:15:52.215461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:51.828 Malloc1 00:22:51.828 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.085 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:52.085 16:15:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.343 [2024-11-20 16:15:53.100832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.343 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:52.601 16:15:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:52.601 16:15:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:52.859 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:52.859 fio-3.35 00:22:52.859 Starting 1 thread 00:22:55.397 00:22:55.397 test: (groupid=0, jobs=1): err= 0: pid=2827421: Wed Nov 20 16:15:55 2024 00:22:55.397 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2005msec) 00:22:55.397 slat (nsec): min=1592, max=239887, avg=1730.97, stdev=2235.22 00:22:55.397 clat (usec): min=3123, max=10942, avg=6106.44, stdev=459.39 00:22:55.397 lat (usec): min=3157, max=10944, avg=6108.17, stdev=459.30 00:22:55.397 clat percentiles (usec): 00:22:55.397 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:55.397 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:22:55.397 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:22:55.397 | 99.00th=[ 7111], 99.50th=[ 7177], 99.90th=[ 8979], 99.95th=[10290], 00:22:55.397 | 99.99th=[10421] 00:22:55.397 bw ( KiB/s): min=45536, max=46984, per=99.91%, avg=46374.00, stdev=622.44, samples=4 00:22:55.397 iops : min=11384, max=11746, avg=11593.50, stdev=155.61, samples=4 00:22:55.397 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.2MiB/2005msec); 0 zone resets 00:22:55.397 slat (nsec): min=1620, max=233562, avg=1790.13, stdev=1698.27 00:22:55.397 clat (usec): min=2438, max=9715, avg=4915.02, stdev=372.25 00:22:55.397 lat (usec): min=2454, max=9717, avg=4916.81, stdev=372.23 00:22:55.397 clat percentiles (usec): 00:22:55.397 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:55.397 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 5014], 
00:22:55.397 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:55.397 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6980], 99.95th=[ 8979], 00:22:55.397 | 99.99th=[ 9634] 00:22:55.397 bw ( KiB/s): min=45824, max=46400, per=100.00%, avg=46098.00, stdev=281.28, samples=4 00:22:55.397 iops : min=11456, max=11600, avg=11524.50, stdev=70.32, samples=4 00:22:55.397 lat (msec) : 4=0.33%, 10=99.64%, 20=0.03% 00:22:55.397 cpu : usr=73.80%, sys=25.25%, ctx=95, majf=0, minf=3 00:22:55.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:55.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:55.397 issued rwts: total=23266,23097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:55.398 00:22:55.398 Run status group 0 (all jobs): 00:22:55.398 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2005-2005msec 00:22:55.398 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.2MiB (94.6MB), run=2005-2005msec 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:55.398 16:15:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:55.655 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:55.655 fio-3.35 00:22:55.655 Starting 1 thread 00:22:58.183 00:22:58.184 test: (groupid=0, jobs=1): err= 0: pid=2827994: Wed Nov 20 16:15:58 2024 00:22:58.184 read: IOPS=10.8k, BW=168MiB/s (177MB/s)(338MiB/2006msec) 00:22:58.184 slat (nsec): min=2570, max=86635, avg=2880.28, stdev=1489.42 00:22:58.184 clat (usec): min=1360, max=13596, avg=6765.02, stdev=1611.96 00:22:58.184 lat (usec): min=1363, max=13598, avg=6767.90, stdev=1612.09 00:22:58.184 clat percentiles (usec): 00:22:58.184 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5407], 00:22:58.184 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:22:58.184 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9503], 00:22:58.184 | 99.00th=[11469], 99.50th=[11731], 99.90th=[13042], 99.95th=[13435], 00:22:58.184 | 99.99th=[13566] 00:22:58.184 bw ( KiB/s): min=83584, max=95872, per=51.20%, avg=88248.00, stdev=5304.31, samples=4 00:22:58.184 iops : min= 5224, max= 5992, avg=5515.50, stdev=331.52, samples=4 00:22:58.184 write: IOPS=6390, BW=99.8MiB/s (105MB/s)(181MiB/1811msec); 0 zone resets 00:22:58.184 slat (usec): min=29, max=387, avg=32.28, stdev= 7.99 00:22:58.184 clat (usec): min=2001, max=16941, avg=8795.93, stdev=1493.18 00:22:58.184 lat (usec): min=2031, max=16972, avg=8828.20, stdev=1494.76 00:22:58.184 clat percentiles (usec): 00:22:58.184 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 7046], 
20.00th=[ 7570], 00:22:58.184 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 8979], 00:22:58.184 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10814], 95.00th=[11338], 00:22:58.184 | 99.00th=[12780], 99.50th=[13435], 99.90th=[16319], 99.95th=[16712], 00:22:58.184 | 99.99th=[16909] 00:22:58.184 bw ( KiB/s): min=87744, max=99712, per=89.99%, avg=92008.00, stdev=5303.99, samples=4 00:22:58.184 iops : min= 5484, max= 6232, avg=5750.50, stdev=331.50, samples=4 00:22:58.184 lat (msec) : 2=0.05%, 4=1.70%, 10=88.85%, 20=9.40% 00:22:58.184 cpu : usr=85.99%, sys=11.82%, ctx=167, majf=0, minf=3 00:22:58.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:58.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:58.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:58.184 issued rwts: total=21611,11573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:58.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:58.184 00:22:58.184 Run status group 0 (all jobs): 00:22:58.184 READ: bw=168MiB/s (177MB/s), 168MiB/s-168MiB/s (177MB/s-177MB/s), io=338MiB (354MB), run=2006-2006msec 00:22:58.184 WRITE: bw=99.8MiB/s (105MB/s), 99.8MiB/s-99.8MiB/s (105MB/s-105MB/s), io=181MiB (190MB), run=1811-1811msec 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.184 rmmod nvme_tcp 00:22:58.184 rmmod nvme_fabrics 00:22:58.184 rmmod nvme_keyring 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2827042 ']' 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2827042 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2827042 ']' 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2827042 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827042 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2827042' 
00:22:58.184 killing process with pid 2827042 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2827042 00:22:58.184 16:15:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2827042 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.445 16:15:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.995 00:23:00.995 real 0m15.622s 00:23:00.995 user 0m45.370s 00:23:00.995 sys 0m6.403s 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.995 ************************************ 
00:23:00.995 END TEST nvmf_fio_host 00:23:00.995 ************************************ 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.995 ************************************ 00:23:00.995 START TEST nvmf_failover 00:23:00.995 ************************************ 00:23:00.995 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:00.995 * Looking for test storage... 00:23:00.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.996 16:16:01 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.996 --rc genhtml_branch_coverage=1 00:23:00.996 --rc genhtml_function_coverage=1 00:23:00.996 --rc genhtml_legend=1 00:23:00.996 --rc geninfo_all_blocks=1 00:23:00.996 --rc geninfo_unexecuted_blocks=1 00:23:00.996 00:23:00.996 ' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.996 --rc genhtml_branch_coverage=1 00:23:00.996 --rc genhtml_function_coverage=1 00:23:00.996 --rc genhtml_legend=1 00:23:00.996 --rc geninfo_all_blocks=1 00:23:00.996 --rc geninfo_unexecuted_blocks=1 00:23:00.996 00:23:00.996 ' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.996 --rc genhtml_branch_coverage=1 00:23:00.996 --rc genhtml_function_coverage=1 00:23:00.996 --rc genhtml_legend=1 00:23:00.996 --rc geninfo_all_blocks=1 00:23:00.996 --rc geninfo_unexecuted_blocks=1 00:23:00.996 00:23:00.996 ' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.996 --rc genhtml_branch_coverage=1 00:23:00.996 --rc genhtml_function_coverage=1 00:23:00.996 --rc genhtml_legend=1 00:23:00.996 --rc 
geninfo_all_blocks=1 00:23:00.996 --rc geninfo_unexecuted_blocks=1 00:23:00.996 00:23:00.996 ' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.996 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.997 16:16:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.572 16:16:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.572 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.573 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.573 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:23:07.573 00:23:07.573 --- 10.0.0.2 ping statistics --- 00:23:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.573 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:23:07.573 00:23:07.573 --- 10.0.0.1 ping statistics --- 00:23:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.573 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2831943 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2831943 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2831943 ']' 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.573 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:07.573 [2024-11-20 16:16:07.504990] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:07.573 [2024-11-20 16:16:07.505044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.573 [2024-11-20 16:16:07.583693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.573 [2024-11-20 16:16:07.626193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.573 [2024-11-20 16:16:07.626232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.573 [2024-11-20 16:16:07.626239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.573 [2024-11-20 16:16:07.626246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:07.573 [2024-11-20 16:16:07.626251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.573 [2024-11-20 16:16:07.627598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.573 [2024-11-20 16:16:07.627704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.573 [2024-11-20 16:16:07.627704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:07.574 [2024-11-20 16:16:07.933057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.574 16:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:07.574 Malloc0 00:23:07.574 16:16:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.574 16:16:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.832 16:16:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.090 [2024-11-20 16:16:08.742292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.090 16:16:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:08.347 [2024-11-20 16:16:08.950864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:08.347 16:16:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:08.347 [2024-11-20 16:16:09.143454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2832225 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2832225 /var/tmp/bdevperf.sock 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2832225 ']' 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.347 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:08.911 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.911 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:08.911 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:08.911 NVMe0n1 00:23:08.911 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:09.168 00:23:09.168 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2832253 00:23:09.168 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.168 16:16:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:10.540 16:16:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.540 [2024-11-20 16:16:11.150387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 [2024-11-20 16:16:11.150521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13152d0 is same with the state(6) to be set 00:23:10.540 16:16:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:13.820 16:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:13.820 00:23:13.820 16:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.078 [2024-11-20 16:16:14.663053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 [2024-11-20 16:16:14.663093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 [2024-11-20 16:16:14.663100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 
00:23:14.078 [2024-11-20 16:16:14.663107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 [2024-11-20 16:16:14.663113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 [2024-11-20 16:16:14.663119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 [2024-11-20 16:16:14.663125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 [2024-11-20 16:16:14.663137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315fa0 is same with the state(6) to be set 00:23:14.078 16:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:17.357 16:16:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.357 [2024-11-20 16:16:17.876599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.357 16:16:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:18.290 16:16:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:18.290 [2024-11-20 16:16:19.097507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316ce0 is same with the state(6) to be set 00:23:18.290 [2024-11-20 16:16:19.097545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316ce0 is same with the state(6) to be set 00:23:18.290 [2024-11-20 16:16:19.097553] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316ce0 is same with the state(6) to be set 00:23:18.547 16:16:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2832253 00:23:25.108 { 00:23:25.108 "results": [ 00:23:25.108 { 00:23:25.108 "job": "NVMe0n1", 00:23:25.108 "core_mask": "0x1", 00:23:25.108 "workload": "verify", 00:23:25.108 "status": "finished", 00:23:25.108 "verify_range": { 00:23:25.108 "start": 0, 00:23:25.108 "length": 16384 00:23:25.108 }, 00:23:25.108 "queue_depth": 128, 00:23:25.108 "io_size": 4096, 00:23:25.108 "runtime": 15.001345, 00:23:25.108 "iops": 10961.417126264345, 00:23:25.108 "mibps": 42.8180356494701, 00:23:25.108 "io_failed": 7677, 00:23:25.108 "io_timeout": 0, 00:23:25.108 "avg_latency_us": 11133.678881442653, 00:23:25.108 "min_latency_us": 432.7513043478261, 00:23:25.108 "max_latency_us": 20857.544347826086 00:23:25.108 } 00:23:25.108 ], 00:23:25.108 "core_count": 1 00:23:25.108 } 00:23:25.108 16:16:25
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2832225 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2832225 ']' 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2832225 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832225 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832225' 00:23:25.108 killing process with pid 2832225 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2832225 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2832225 00:23:25.108 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.108 [2024-11-20 16:16:09.221700] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:23:25.108 [2024-11-20 16:16:09.221755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832225 ] 00:23:25.108 [2024-11-20 16:16:09.297651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.108 [2024-11-20 16:16:09.339334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.108 Running I/O for 15 seconds... 00:23:25.108 11088.00 IOPS, 43.31 MiB/s [2024-11-20T15:16:25.945Z] [2024-11-20 16:16:11.151831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.108 [2024-11-20 16:16:11.151865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:25.109 [2024-11-20 16:16:11.151931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.151989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.151997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 
16:16:11.152199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152282] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.109 [2024-11-20 16:16:11.152477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.109 [2024-11-20 16:16:11.152486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.110 [2024-11-20 16:16:11.152610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.110 
[2024-11-20 16:16:11.152630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.110 [2024-11-20 16:16:11.152645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.152986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.152993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.153003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.153009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.153017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.153024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.153032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.153038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.110 [2024-11-20 16:16:11.153047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.110 [2024-11-20 16:16:11.153053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 
16:16:11.153061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.111 [2024-11-20 16:16:11.153068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.111 [2024-11-20 16:16:11.153082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.111 [2024-11-20 16:16:11.153098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.111 [2024-11-20 16:16:11.153113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.111 [2024-11-20 16:16:11.153128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97704 len:8 
PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97712 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97720 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97728 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153247] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153329] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153489] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 [2024-11-20 16:16:11.153548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:23:25.111 
[2024-11-20 16:16:11.153571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.111 [2024-11-20 16:16:11.153580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.111 [2024-11-20 16:16:11.153585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.111 [2024-11-20 16:16:11.153590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97864 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97872 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97880 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97888 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97896 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97904 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97912 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97920 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97928 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97936 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.153977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.153983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.153991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.153995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.154001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.154008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.154014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.154019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.154025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.154032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.154039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.154045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.164867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.164878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.164886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 
[2024-11-20 16:16:11.164892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.164897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.164904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.164911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.164916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.112 [2024-11-20 16:16:11.164922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:23:25.112 [2024-11-20 16:16:11.164928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.112 [2024-11-20 16:16:11.164934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.112 [2024-11-20 16:16:11.164939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.113 [2024-11-20 16:16:11.164945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:23:25.113 [2024-11-20 16:16:11.164954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.164964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.113 [2024-11-20 16:16:11.164969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.113 [2024-11-20 16:16:11.164974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:23:25.113 [2024-11-20 16:16:11.164981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.164987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.113 [2024-11-20 16:16:11.164992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.113 [2024-11-20 16:16:11.164998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:23:25.113 [2024-11-20 16:16:11.165004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.113 [2024-11-20 16:16:11.165017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.113 [2024-11-20 16:16:11.165022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:23:25.113 [2024-11-20 16:16:11.165029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.113 [2024-11-20 16:16:11.165040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.113 [2024-11-20 16:16:11.165045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:23:25.113 [2024-11-20 16:16:11.165052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165096] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:25.113 [2024-11-20 16:16:11.165119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.113 [2024-11-20 16:16:11.165127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.113 [2024-11-20 16:16:11.165141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.113 [2024-11-20 16:16:11.165154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.113 [2024-11-20 16:16:11.165183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:11.165193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:25.113 [2024-11-20 16:16:11.165238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a9340 (9): Bad file descriptor 00:23:25.113 [2024-11-20 16:16:11.169106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:25.113 [2024-11-20 16:16:11.195502] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:25.113 10903.50 IOPS, 42.59 MiB/s [2024-11-20T15:16:25.950Z] 10987.67 IOPS, 42.92 MiB/s [2024-11-20T15:16:25.950Z] 11046.00 IOPS, 43.15 MiB/s [2024-11-20T15:16:25.950Z] [2024-11-20 16:16:14.665592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.113 [2024-11-20 16:16:14.665758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.113 [2024-11-20 16:16:14.665767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.113 
[2024-11-20 16:16:14.665773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.113 [2024-11-20 16:16:14.665782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.113 [2024-11-20 16:16:14.665788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:25.114 [2024-11-20 16:16:14.665881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.665988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.665996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.114 [2024-11-20 16:16:14.666327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.114 [2024-11-20 16:16:14.666334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:25.115 [2024-11-20 16:16:14.666580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24088 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.115 [2024-11-20 16:16:14.666631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.115 [2024-11-20 16:16:14.666660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24104 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.115 [2024-11-20 16:16:14.666683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24112 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.115 [2024-11-20 16:16:14.666707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24120 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.115 [2024-11-20 16:16:14.666730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.115 [2024-11-20 16:16:14.666754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.115 [2024-11-20 16:16:14.666761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24136 len:8 PRP1 0x0 PRP2 0x0
00:23:25.115 [2024-11-20 16:16:14.666768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.115 [2024-11-20 16:16:14.666775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24144 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24152 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24168 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24176 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24184 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24200 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.666978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24208 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.666985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.666993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.666998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24216 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.667023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.667046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24232 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.667069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24240 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.667092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24248 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.667115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.116 [2024-11-20 16:16:14.667140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.116 [2024-11-20 16:16:14.667146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24264 len:8 PRP1 0x0 PRP2 0x0
00:23:25.116 [2024-11-20 16:16:14.667152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.116 [2024-11-20 16:16:14.667158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24272 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24280 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24296 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24304 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24312 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24328 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 [2024-11-20 16:16:14.667350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:25.117 [2024-11-20 16:16:14.667357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:25.117 [2024-11-20 16:16:14.667362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:25.117 [2024-11-20 16:16:14.667367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24336 len:8 PRP1 0x0 PRP2 0x0
00:23:25.117 
[2024-11-20 16:16:14.667373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24344 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24360 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24368 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24376 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.117 [2024-11-20 16:16:14.667531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24392 len:8 PRP1 0x0 PRP2 0x0 00:23:25.117 [2024-11-20 16:16:14.667537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.117 [2024-11-20 16:16:14.667543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.117 [2024-11-20 16:16:14.667548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24400 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.667559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.667567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.667572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24408 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.667583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.667590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.667594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.667606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.667612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.667617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24424 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.667628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.667635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.667640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24432 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.667651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.667657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.667662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24440 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.667674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.667682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.667687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.667692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.677800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.677816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.677824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.677831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24456 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.677840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.677849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.677855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.677862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24464 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.677871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.677881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.677888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.677895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24472 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.677903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.677913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.677919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.677927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.677935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.677944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.677955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.677963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24488 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.677971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.677980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.677987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.677994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24496 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.678003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.678012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 
[2024-11-20 16:16:14.678018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.678025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24504 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.678034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.678046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.678053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.678071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.678083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.678093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.678099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.118 [2024-11-20 16:16:14.678106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24520 len:8 PRP1 0x0 PRP2 0x0 00:23:25.118 [2024-11-20 16:16:14.678115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.118 [2024-11-20 16:16:14.678124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.118 [2024-11-20 16:16:14.678130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24528 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24536 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24552 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678249] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24560 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24568 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 16:16:14.678328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.119 [2024-11-20 16:16:14.678352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.119 [2024-11-20 
16:16:14.678359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24584 len:8 PRP1 0x0 PRP2 0x0 00:23:25.119 [2024-11-20 16:16:14.678368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678418] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:25.119 [2024-11-20 16:16:14.678448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.119 [2024-11-20 16:16:14.678458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.119 [2024-11-20 16:16:14.678477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.119 [2024-11-20 16:16:14.678496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.119 [2024-11-20 16:16:14.678515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.119 [2024-11-20 16:16:14.678524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed 
state. 00:23:25.119 [2024-11-20 16:16:14.678563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a9340 (9): Bad file descriptor 00:23:25.119 [2024-11-20 16:16:14.682434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:25.119 [2024-11-20 16:16:14.747396] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:25.119 10875.40 IOPS, 42.48 MiB/s [2024-11-20T15:16:25.956Z] 10917.33 IOPS, 42.65 MiB/s [2024-11-20T15:16:25.956Z] 10966.43 IOPS, 42.84 MiB/s [2024-11-20T15:16:25.956Z] 10961.25 IOPS, 42.82 MiB/s [2024-11-20T15:16:25.956Z] 10972.33 IOPS, 42.86 MiB/s [2024-11-20T15:16:25.956Z] [2024-11-20 16:16:19.099086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.120 [2024-11-20 16:16:19.099118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 
[2024-11-20 16:16:19.099180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.120 [2024-11-20 16:16:19.099268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.120 [2024-11-20 16:16:19.099532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.120 [2024-11-20 16:16:19.099538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 
16:16:19.099603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099683] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.121 [2024-11-20 16:16:19.099931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.121 [2024-11-20 16:16:19.099939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.099946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.099958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.099965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.099972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.099979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.099987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.099993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.100008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 
[2024-11-20 16:16:19.100022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.100036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.100050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.100065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.122 [2024-11-20 16:16:19.100079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40280 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40288 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40296 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40304 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40312 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40320 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40328 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40336 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40344 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40352 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40360 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:23:25.122 [2024-11-20 16:16:19.100369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40368 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40376 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.122 [2024-11-20 16:16:19.100415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.122 [2024-11-20 16:16:19.100420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40384 len:8 PRP1 0x0 PRP2 0x0 00:23:25.122 [2024-11-20 16:16:19.100426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.122 [2024-11-20 16:16:19.100433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:40392 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40400 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40408 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40416 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 
16:16:19.100528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40424 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40432 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40440 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 
[2024-11-20 16:16:19.100607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40448 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40456 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40464 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40472 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40480 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40488 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40496 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100768] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40504 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40512 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40520 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40528 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 
[2024-11-20 16:16:19.100850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40536 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40544 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40552 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40560 len:8 PRP1 0x0 PRP2 0x0 00:23:25.123 [2024-11-20 16:16:19.100945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.123 [2024-11-20 16:16:19.100955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.123 [2024-11-20 16:16:19.100959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.123 [2024-11-20 16:16:19.100965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40568 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.100971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.100978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.100983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.100988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40576 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.100994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40584 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40592 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40600 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40608 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40616 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40624 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40632 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.101160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.101167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.101171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.101176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40640 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40648 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40656 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40664 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40672 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40680 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40688 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 
[2024-11-20 16:16:19.112238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40696 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40704 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40712 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:40720 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39720 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39728 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39736 len:8 PRP1 0x0 PRP2 0x0 00:23:25.124 [2024-11-20 16:16:19.112391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.124 [2024-11-20 16:16:19.112397] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.124 [2024-11-20 16:16:19.112402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.124 [2024-11-20 16:16:19.112407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39744 len:8 PRP1 0x0 PRP2 0x0 00:23:25.125 [2024-11-20 16:16:19.112413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.125 [2024-11-20 16:16:19.112424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.125 [2024-11-20 16:16:19.112430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39752 len:8 PRP1 0x0 PRP2 0x0 00:23:25.125 [2024-11-20 16:16:19.112437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.125 [2024-11-20 16:16:19.112448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.125 [2024-11-20 16:16:19.112453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39760 len:8 PRP1 0x0 PRP2 0x0 00:23:25.125 [2024-11-20 16:16:19.112460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.125 [2024-11-20 16:16:19.112472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.125 [2024-11-20 16:16:19.112478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39768 len:8 PRP1 0x0 PRP2 0x0 00:23:25.125 [2024-11-20 16:16:19.112484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112527] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:25.125 [2024-11-20 16:16:19.112550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.125 [2024-11-20 16:16:19.112557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.125 [2024-11-20 16:16:19.112571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.125 [2024-11-20 16:16:19.112585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.125 [2024-11-20 16:16:19.112599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.125 [2024-11-20 16:16:19.112606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:25.125 [2024-11-20 16:16:19.112636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a9340 (9): Bad file descriptor 00:23:25.125 [2024-11-20 16:16:19.116027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:25.125 [2024-11-20 16:16:19.184548] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:25.125 10873.70 IOPS, 42.48 MiB/s [2024-11-20T15:16:25.962Z] 10907.55 IOPS, 42.61 MiB/s [2024-11-20T15:16:25.962Z] 10938.92 IOPS, 42.73 MiB/s [2024-11-20T15:16:25.962Z] 10953.00 IOPS, 42.79 MiB/s [2024-11-20T15:16:25.962Z] 10956.14 IOPS, 42.80 MiB/s 00:23:25.125 Latency(us) 00:23:25.125 [2024-11-20T15:16:25.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.125 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:25.125 Verification LBA range: start 0x0 length 0x4000 00:23:25.125 NVMe0n1 : 15.00 10961.42 42.82 511.75 0.00 11133.68 432.75 20857.54 00:23:25.125 [2024-11-20T15:16:25.962Z] =================================================================================================================== 00:23:25.125 [2024-11-20T15:16:25.962Z] Total : 10961.42 42.82 511.75 0.00 11133.68 432.75 20857.54 00:23:25.125 Received shutdown signal, test time was about 15.000000 seconds 00:23:25.125 00:23:25.125 Latency(us) 00:23:25.125 [2024-11-20T15:16:25.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.125 [2024-11-20T15:16:25.962Z] =================================================================================================================== 00:23:25.125 [2024-11-20T15:16:25.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2834760 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2834760 /var/tmp/bdevperf.sock 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2834760 ']' 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.125 [2024-11-20 16:16:25.796822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.125 16:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:25.383 [2024-11-20 16:16:26.009426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:25.383 16:16:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:25.640 NVMe0n1 00:23:25.641 16:16:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:25.897 00:23:25.897 16:16:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:26.462 00:23:26.462 16:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.462 16:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:26.463 16:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.719 16:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:29.997 16:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.997 16:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:29.997 16:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.998 16:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2835682 00:23:29.998 16:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2835682 00:23:31.366 { 00:23:31.366 "results": [ 00:23:31.366 { 00:23:31.366 "job": "NVMe0n1", 00:23:31.366 "core_mask": "0x1", 00:23:31.366 "workload": "verify", 00:23:31.366 "status": "finished", 00:23:31.366 "verify_range": { 00:23:31.366 "start": 0, 00:23:31.366 "length": 16384 00:23:31.366 }, 00:23:31.366 "queue_depth": 128, 00:23:31.366 "io_size": 4096, 00:23:31.366 "runtime": 1.008283, 00:23:31.366 "iops": 11137.746049472222, 00:23:31.366 "mibps": 43.506820505750866, 00:23:31.366 "io_failed": 0, 00:23:31.366 "io_timeout": 0, 00:23:31.366 "avg_latency_us": 
11445.376887064927, 00:23:31.366 "min_latency_us": 2535.958260869565, 00:23:31.366 "max_latency_us": 9289.015652173914 00:23:31.366 } 00:23:31.366 ], 00:23:31.366 "core_count": 1 00:23:31.366 } 00:23:31.366 16:16:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:31.366 [2024-11-20 16:16:25.399333] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:31.366 [2024-11-20 16:16:25.399382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834760 ] 00:23:31.366 [2024-11-20 16:16:25.475307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.366 [2024-11-20 16:16:25.513435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.366 [2024-11-20 16:16:27.433613] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:31.366 [2024-11-20 16:16:27.433661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.366 [2024-11-20 16:16:27.433673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.366 [2024-11-20 16:16:27.433681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.366 [2024-11-20 16:16:27.433688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.366 [2024-11-20 16:16:27.433696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:31.367 [2024-11-20 16:16:27.433702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.367 [2024-11-20 16:16:27.433710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.367 [2024-11-20 16:16:27.433716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.367 [2024-11-20 16:16:27.433723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:31.367 [2024-11-20 16:16:27.433748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:31.367 [2024-11-20 16:16:27.433763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc340 (9): Bad file descriptor 00:23:31.367 [2024-11-20 16:16:27.444272] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:31.367 Running I/O for 1 seconds... 
00:23:31.367 11102.00 IOPS, 43.37 MiB/s 00:23:31.367 Latency(us) 00:23:31.367 [2024-11-20T15:16:32.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.367 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.367 Verification LBA range: start 0x0 length 0x4000 00:23:31.367 NVMe0n1 : 1.01 11137.75 43.51 0.00 0.00 11445.38 2535.96 9289.02 00:23:31.367 [2024-11-20T15:16:32.204Z] =================================================================================================================== 00:23:31.367 [2024-11-20T15:16:32.204Z] Total : 11137.75 43.51 0.00 0.00 11445.38 2535.96 9289.02 00:23:31.367 16:16:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.367 16:16:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:31.367 16:16:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.367 16:16:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.367 16:16:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:31.623 16:16:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.880 16:16:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2834760 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2834760 ']' 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2834760 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834760 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834760' 00:23:35.156 killing process with pid 2834760 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2834760 00:23:35.156 16:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2834760 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.414 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.414 rmmod nvme_tcp 00:23:35.686 rmmod nvme_fabrics 00:23:35.686 rmmod nvme_keyring 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2831943 ']' 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2831943 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2831943 ']' 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2831943 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831943 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831943' 00:23:35.686 killing process with pid 2831943 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2831943 00:23:35.686 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2831943 00:23:36.023 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.023 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.023 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.023 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.024 16:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:37.947 00:23:37.947 real 0m37.276s 00:23:37.947 user 1m57.969s 00:23:37.947 sys 
0m7.924s 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:37.947 ************************************ 00:23:37.947 END TEST nvmf_failover 00:23:37.947 ************************************ 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.947 ************************************ 00:23:37.947 START TEST nvmf_host_discovery 00:23:37.947 ************************************ 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:37.947 * Looking for test storage... 
00:23:37.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:37.947 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.207 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.208 --rc genhtml_branch_coverage=1 00:23:38.208 --rc genhtml_function_coverage=1 00:23:38.208 --rc 
genhtml_legend=1 00:23:38.208 --rc geninfo_all_blocks=1 00:23:38.208 --rc geninfo_unexecuted_blocks=1 00:23:38.208 00:23:38.208 ' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.208 --rc genhtml_branch_coverage=1 00:23:38.208 --rc genhtml_function_coverage=1 00:23:38.208 --rc genhtml_legend=1 00:23:38.208 --rc geninfo_all_blocks=1 00:23:38.208 --rc geninfo_unexecuted_blocks=1 00:23:38.208 00:23:38.208 ' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.208 --rc genhtml_branch_coverage=1 00:23:38.208 --rc genhtml_function_coverage=1 00:23:38.208 --rc genhtml_legend=1 00:23:38.208 --rc geninfo_all_blocks=1 00:23:38.208 --rc geninfo_unexecuted_blocks=1 00:23:38.208 00:23:38.208 ' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.208 --rc genhtml_branch_coverage=1 00:23:38.208 --rc genhtml_function_coverage=1 00:23:38.208 --rc genhtml_legend=1 00:23:38.208 --rc geninfo_all_blocks=1 00:23:38.208 --rc geninfo_unexecuted_blocks=1 00:23:38.208 00:23:38.208 ' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.208 16:16:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.208 16:16:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.208 16:16:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.208 16:16:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.783 
16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.783 16:16:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:44.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:44.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.783 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:44.784 Found net devices under 0000:86:00.0: cvl_0_0 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:44.784 Found net devices under 0000:86:00.1: cvl_0_1 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:23:44.784 00:23:44.784 --- 10.0.0.2 ping statistics --- 00:23:44.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.784 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:44.784 00:23:44.784 --- 10.0.0.1 ping statistics --- 00:23:44.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.784 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.784 
16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2840136 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2840136 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2840136 ']' 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.784 16:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 [2024-11-20 16:16:44.916151] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:44.784 [2024-11-20 16:16:44.916203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.784 [2024-11-20 16:16:44.996792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.784 [2024-11-20 16:16:45.037937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.784 [2024-11-20 16:16:45.037976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.784 [2024-11-20 16:16:45.037983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.784 [2024-11-20 16:16:45.037988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.784 [2024-11-20 16:16:45.037993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.784 [2024-11-20 16:16:45.038582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 [2024-11-20 16:16:45.175215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 [2024-11-20 16:16:45.187396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:44.784 16:16:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 null0 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.784 null1 00:23:44.784 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2840157 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2840157 /tmp/host.sock 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2840157 ']' 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:44.785 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 [2024-11-20 16:16:45.264384] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:44.785 [2024-11-20 16:16:45.264428] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840157 ] 00:23:44.785 [2024-11-20 16:16:45.341410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.785 [2024-11-20 16:16:45.384589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:44.785 
16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:44.785 16:16:45 
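The xtrace up to this point boils down to a short RPC sequence: create the TCP transport, expose the discovery subsystem on port 8009, back it with two null bdevs, then point the host application at the discovery service. A condensed, hedged replay of those calls follows — `rpc_cmd` is stubbed to print its arguments here, since no SPDK target is running; in the real test it wraps `scripts/rpc.py` against `/var/tmp/spdk.sock` or `/tmp/host.sock`.

```shell
#!/usr/bin/env bash
# Stub: just echo the RPC so the sequence is visible without a live target.
rpc_cmd() { echo "rpc: $*"; }

# Target side (commands taken verbatim from the trace): TCP transport,
# discovery listener on 10.0.0.2:8009, and two 1000 MiB / 512 B null bdevs.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512

# Host side: enable bdev_nvme logging, then start discovery against port 8009.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```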
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:44.785 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:45.044 
16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 [2024-11-20 16:16:45.804994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:45.044 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:45.303 16:16:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:45.870 [2024-11-20 16:16:46.553478] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:45.870 [2024-11-20 16:16:46.553496] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:45.870 [2024-11-20 16:16:46.553507] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:45.870 [2024-11-20 16:16:46.680901] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:46.129 [2024-11-20 16:16:46.782692] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:46.129 [2024-11-20 16:16:46.783430] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1fbddf0:1 started. 00:23:46.129 [2024-11-20 16:16:46.784820] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.129 [2024-11-20 16:16:46.784835] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.129 [2024-11-20 16:16:46.833496] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fbddf0 was disconnected and freed. delete nvme_qpair. 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:46.389 16:16:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.389 16:16:47 
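The repeated `local max=10` / `(( max-- ))` / `eval "$cond"` lines in the trace come from the `waitforcondition` helper in `common/autotest_common.sh`, which polls a shell condition string until it holds. A minimal sketch reconstructed from what the xtrace shows — the one-second sleep interval (`sleep 1` at marker @924) and the return value on exhaustion are assumptions:

```shell
# Poll a condition string up to 10 times, as the xtrace suggests
# (local cond / local max=10 / (( max-- )) / eval "$cond" / sleep 1).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition held: stop waiting
        sleep 1
    done
    return 1   # assumption: callers treat exhaustion as test failure
}

# Usage mirroring the trace, with a trivially true condition:
waitforcondition '[[ 1 == 1 ]]' && echo "condition met"
```

This is why the log shows the same `get_subsystem_names` RPC repeated before and after the discovery controller attaches: the first evaluation sees an empty name list, the retry after `sleep 1` sees `nvme0`.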
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:46.389 
16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.389 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.648 [2024-11-20 16:16:47.388634] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f8c620:1 started. 00:23:46.648 [2024-11-20 16:16:47.393744] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f8c620 was disconnected and freed. delete nvme_qpair. 
00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.648 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.649 [2024-11-20 16:16:47.473561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.649 [2024-11-20 16:16:47.474014] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:46.649 [2024-11-20 16:16:47.474033] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:46.649 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.908 16:16:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:46.908 16:16:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.908 [2024-11-20 16:16:47.601762] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:46.908 16:16:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:46.908 [2024-11-20 16:16:47.660421] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:46.908 [2024-11-20 16:16:47.660455] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.908 [2024-11-20 16:16:47.660465] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:46.908 [2024-11-20 16:16:47.660471] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:47.845 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:47.846 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.106 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.107 [2024-11-20 16:16:48.733402] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:48.107 [2024-11-20 16:16:48.733422] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:48.107 [2024-11-20 16:16:48.743071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.107 [2024-11-20 16:16:48.743093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.107 [2024-11-20 16:16:48.743102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.107 [2024-11-20 16:16:48.743110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.107 [2024-11-20 16:16:48.743117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.107 [2024-11-20 16:16:48.743125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.107 [2024-11-20 16:16:48.743132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.107 [2024-11-20 16:16:48.743140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.107 [2024-11-20 16:16:48.743147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.107 16:16:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:48.107 [2024-11-20 16:16:48.753084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.107 [2024-11-20 16:16:48.763118] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.107 [2024-11-20 16:16:48.763130] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:48.107 [2024-11-20 16:16:48.763135] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.107 [2024-11-20 16:16:48.763139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.107 [2024-11-20 16:16:48.763155] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:48.107 [2024-11-20 16:16:48.763325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.107 [2024-11-20 16:16:48.763338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8e390 with addr=10.0.0.2, port=4420 00:23:48.107 [2024-11-20 16:16:48.763346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.107 [2024-11-20 16:16:48.763357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.107 [2024-11-20 16:16:48.763374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:48.107 [2024-11-20 16:16:48.763381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:48.107 [2024-11-20 16:16:48.763389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:48.107 [2024-11-20 16:16:48.763396] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:48.107 [2024-11-20 16:16:48.763401] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:48.107 [2024-11-20 16:16:48.763408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:48.107 [2024-11-20 16:16:48.773187] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.107 [2024-11-20 16:16:48.773198] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:48.107 [2024-11-20 16:16:48.773202] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.107 [2024-11-20 16:16:48.773206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.107 [2024-11-20 16:16:48.773220] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:48.107 [2024-11-20 16:16:48.773484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.107 [2024-11-20 16:16:48.773497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8e390 with addr=10.0.0.2, port=4420 00:23:48.107 [2024-11-20 16:16:48.773504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.107 [2024-11-20 16:16:48.773514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.107 [2024-11-20 16:16:48.773537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:48.107 [2024-11-20 16:16:48.773544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:48.107 [2024-11-20 16:16:48.773551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:48.107 [2024-11-20 16:16:48.773556] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:48.107 [2024-11-20 16:16:48.773561] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:48.107 [2024-11-20 16:16:48.773564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:48.107 [2024-11-20 16:16:48.783252] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.107 [2024-11-20 16:16:48.783266] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:48.107 [2024-11-20 16:16:48.783271] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.107 [2024-11-20 16:16:48.783274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.107 [2024-11-20 16:16:48.783289] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:48.107 [2024-11-20 16:16:48.783508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.107 [2024-11-20 16:16:48.783522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8e390 with addr=10.0.0.2, port=4420 00:23:48.107 [2024-11-20 16:16:48.783529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.107 [2024-11-20 16:16:48.783540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.107 [2024-11-20 16:16:48.783557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:48.107 [2024-11-20 16:16:48.783564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:48.107 [2024-11-20 16:16:48.783571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:48.107 [2024-11-20 16:16:48.783576] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:48.107 [2024-11-20 16:16:48.783584] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:48.107 [2024-11-20 16:16:48.783588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.107 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:23:48.108 [2024-11-20 16:16:48.793321] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.108 [2024-11-20 16:16:48.793333] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:48.108 [2024-11-20 16:16:48.793337] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.108 [2024-11-20 16:16:48.793341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.108 [2024-11-20 16:16:48.793354] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:48.108 [2024-11-20 16:16:48.793557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.108 [2024-11-20 16:16:48.793570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8e390 with addr=10.0.0.2, port=4420 00:23:48.108 [2024-11-20 16:16:48.793578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.108 [2024-11-20 16:16:48.793589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.108 [2024-11-20 16:16:48.793612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:48.108 [2024-11-20 16:16:48.793619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:48.108 [2024-11-20 16:16:48.793626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:48.108 [2024-11-20 16:16:48.793631] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:48.108 [2024-11-20 16:16:48.793636] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:48.108 [2024-11-20 16:16:48.793640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:48.108 [2024-11-20 16:16:48.803385] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.108 [2024-11-20 16:16:48.803403] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:48.108 [2024-11-20 16:16:48.803407] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.108 [2024-11-20 16:16:48.803411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.108 [2024-11-20 16:16:48.803426] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:48.108 [2024-11-20 16:16:48.803540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.108 [2024-11-20 16:16:48.803551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8e390 with addr=10.0.0.2, port=4420 00:23:48.108 [2024-11-20 16:16:48.803559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.108 [2024-11-20 16:16:48.803569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.108 [2024-11-20 16:16:48.803579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:48.108 [2024-11-20 16:16:48.803585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:48.108 [2024-11-20 16:16:48.803591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:48.108 [2024-11-20 16:16:48.803597] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:48.108 [2024-11-20 16:16:48.803601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:48.108 [2024-11-20 16:16:48.803605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:48.108 [2024-11-20 16:16:48.813456] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.108 [2024-11-20 16:16:48.813467] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:48.108 [2024-11-20 16:16:48.813471] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.108 [2024-11-20 16:16:48.813475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.108 [2024-11-20 16:16:48.813488] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:48.108 [2024-11-20 16:16:48.813756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.108 [2024-11-20 16:16:48.813768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8e390 with addr=10.0.0.2, port=4420 00:23:48.108 [2024-11-20 16:16:48.813775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e390 is same with the state(6) to be set 00:23:48.108 [2024-11-20 16:16:48.813784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e390 (9): Bad file descriptor 00:23:48.108 [2024-11-20 16:16:48.813813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:48.108 [2024-11-20 16:16:48.813821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:48.108 [2024-11-20 16:16:48.813827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:48.108 [2024-11-20 16:16:48.813833] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:48.108 [2024-11-20 16:16:48.813837] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:48.108 [2024-11-20 16:16:48.813841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:48.108 [2024-11-20 16:16:48.819827] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:48.108 [2024-11-20 16:16:48.819843] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.108 
16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.108 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.368 16:16:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:48.368 16:16:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.369 16:16:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.304 [2024-11-20 16:16:50.111913] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:49.304 [2024-11-20 16:16:50.111933] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:49.304 [2024-11-20 16:16:50.111944] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:49.563 [2024-11-20 16:16:50.199229] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:23:49.822 [2024-11-20 16:16:50.499517] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:49.822 [2024-11-20 16:16:50.500158] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x20f5d70:1 started. 00:23:49.822 [2024-11-20 16:16:50.501817] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:49.822 [2024-11-20 16:16:50.501846] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.822 [2024-11-20 16:16:50.503168] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x20f5d70 was disconnected and freed. delete nvme_qpair. 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.822 request: 00:23:49.822 { 00:23:49.822 "name": "nvme", 00:23:49.822 "trtype": "tcp", 00:23:49.822 "traddr": "10.0.0.2", 00:23:49.822 "adrfam": "ipv4", 00:23:49.822 "trsvcid": "8009", 00:23:49.822 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:49.822 "wait_for_attach": true, 00:23:49.822 "method": "bdev_nvme_start_discovery", 00:23:49.822 "req_id": 1 00:23:49.822 } 00:23:49.822 Got JSON-RPC error response 00:23:49.822 response: 00:23:49.822 { 00:23:49.822 "code": -17, 00:23:49.822 "message": "File exists" 00:23:49.822 } 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.822 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.823 request: 00:23:49.823 { 00:23:49.823 "name": "nvme_second", 00:23:49.823 "trtype": "tcp", 00:23:49.823 "traddr": "10.0.0.2", 00:23:49.823 "adrfam": "ipv4", 00:23:49.823 "trsvcid": "8009", 00:23:49.823 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:49.823 "wait_for_attach": true, 00:23:49.823 "method": "bdev_nvme_start_discovery", 00:23:49.823 "req_id": 1 00:23:49.823 } 00:23:49.823 Got JSON-RPC error response 00:23:49.823 response: 00:23:49.823 { 00:23:49.823 "code": -17, 00:23:49.823 "message": "File exists" 00:23:49.823 } 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:49.823 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:50.082 16:16:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:50.082 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.083 16:16:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.019 [2024-11-20 16:16:51.745309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.019 [2024-11-20 16:16:51.745337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1fc8f20 with addr=10.0.0.2, port=8010 00:23:51.019 [2024-11-20 16:16:51.745352] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:51.019 [2024-11-20 16:16:51.745359] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:51.019 [2024-11-20 16:16:51.745365] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:51.957 [2024-11-20 16:16:52.747760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.957 [2024-11-20 16:16:52.747784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc8f20 with addr=10.0.0.2, port=8010 00:23:51.957 [2024-11-20 16:16:52.747795] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:51.957 [2024-11-20 16:16:52.747802] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:51.957 [2024-11-20 16:16:52.747808] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:53.335 [2024-11-20 16:16:53.749919] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:53.335 request: 00:23:53.335 { 00:23:53.335 "name": "nvme_second", 00:23:53.335 "trtype": "tcp", 00:23:53.335 "traddr": "10.0.0.2", 00:23:53.335 "adrfam": "ipv4", 00:23:53.335 "trsvcid": "8010", 00:23:53.335 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:53.335 "wait_for_attach": false, 00:23:53.335 "attach_timeout_ms": 3000, 00:23:53.335 "method": "bdev_nvme_start_discovery", 00:23:53.335 "req_id": 1 00:23:53.336 } 00:23:53.336 Got JSON-RPC error response 00:23:53.336 response: 00:23:53.336 { 00:23:53.336 "code": -110, 00:23:53.336 "message": "Connection timed out" 00:23:53.336 } 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:53.336 16:16:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2840157 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:53.336 16:16:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.336 rmmod nvme_tcp 00:23:53.336 rmmod nvme_fabrics 00:23:53.336 rmmod nvme_keyring 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2840136 ']' 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2840136 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2840136 ']' 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2840136 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840136 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840136' 
00:23:53.336 killing process with pid 2840136 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2840136 00:23:53.336 16:16:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2840136 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.336 16:16:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.878 00:23:55.878 real 0m17.471s 00:23:55.878 user 0m20.989s 00:23:55.878 sys 0m5.859s 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.878 16:16:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.878 ************************************ 00:23:55.878 END TEST nvmf_host_discovery 00:23:55.878 ************************************ 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.878 ************************************ 00:23:55.878 START TEST nvmf_host_multipath_status 00:23:55.878 ************************************ 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:55.878 * Looking for test storage... 
00:23:55.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:55.878 16:16:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:55.878 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.879 16:16:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.879 --rc genhtml_branch_coverage=1 00:23:55.879 --rc genhtml_function_coverage=1 00:23:55.879 --rc genhtml_legend=1 00:23:55.879 --rc geninfo_all_blocks=1 00:23:55.879 --rc geninfo_unexecuted_blocks=1 00:23:55.879 00:23:55.879 ' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.879 --rc genhtml_branch_coverage=1 00:23:55.879 --rc genhtml_function_coverage=1 00:23:55.879 --rc genhtml_legend=1 00:23:55.879 --rc geninfo_all_blocks=1 00:23:55.879 --rc geninfo_unexecuted_blocks=1 00:23:55.879 00:23:55.879 ' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.879 --rc genhtml_branch_coverage=1 00:23:55.879 --rc genhtml_function_coverage=1 00:23:55.879 --rc genhtml_legend=1 00:23:55.879 --rc geninfo_all_blocks=1 00:23:55.879 --rc geninfo_unexecuted_blocks=1 00:23:55.879 00:23:55.879 ' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.879 --rc genhtml_branch_coverage=1 00:23:55.879 --rc genhtml_function_coverage=1 00:23:55.879 --rc genhtml_legend=1 00:23:55.879 --rc geninfo_all_blocks=1 00:23:55.879 --rc geninfo_unexecuted_blocks=1 00:23:55.879 00:23:55.879 ' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:55.879 
16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.879 16:16:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.879 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.456 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:02.457 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:02.457 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:02.457 Found net devices under 0000:86:00.0: cvl_0_0 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.457 16:17:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:02.457 Found net devices under 0000:86:00.1: cvl_0_1 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.457 16:17:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:24:02.457 00:24:02.457 --- 10.0.0.2 ping statistics --- 00:24:02.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.457 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:02.457 00:24:02.457 --- 10.0.0.1 ping statistics --- 00:24:02.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.457 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2845246 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2845246 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2845246 ']' 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.457 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:02.457 [2024-11-20 16:17:02.427141] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:24:02.457 [2024-11-20 16:17:02.427184] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.458 [2024-11-20 16:17:02.504664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:02.458 [2024-11-20 16:17:02.545697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.458 [2024-11-20 16:17:02.545734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:02.458 [2024-11-20 16:17:02.545741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.458 [2024-11-20 16:17:02.545747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.458 [2024-11-20 16:17:02.545752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.458 [2024-11-20 16:17:02.546984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.458 [2024-11-20 16:17:02.546984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2845246 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.458 [2024-11-20 16:17:02.844625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.458 16:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:02.458 Malloc0 00:24:02.458 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:02.716 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.716 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.973 [2024-11-20 16:17:03.693602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.973 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:03.231 [2024-11-20 16:17:03.882041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2845499 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2845499 /var/tmp/bdevperf.sock 00:24:03.231 16:17:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2845499 ']' 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.231 16:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 16:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.489 16:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:03.489 16:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:03.746 16:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:04.004 Nvme0n1 00:24:04.004 16:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:04.261 Nvme0n1 00:24:04.261 16:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:04.262 16:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:06.790 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:06.790 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:06.790 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:06.790 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:07.723 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:07.723 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:07.723 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.723 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.982 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.982 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:07.982 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.982 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.240 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.240 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.240 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.240 16:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.500 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.500 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.500 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.500 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.758 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.017 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.017 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:09.017 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.274 16:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.531 16:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:10.463 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:10.463 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.463 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.463 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.721 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.721 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:10.721 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.721 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.979 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.979 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.979 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.979 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.238 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.238 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.238 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.238 16:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.238 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.238 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.238 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.238 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.496 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.496 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:11.496 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.496 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.754 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.754 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:11.754 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.012 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:12.271 16:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:13.205 16:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:13.205 16:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.205 16:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.205 16:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.464 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.464 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.464 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.464 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.723 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.981 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.981 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:13.981 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.981 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.239 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.239 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.239 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.239 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.496 16:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.496 16:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:14.496 16:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.752 16:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:14.752 16:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.125 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.383 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.383 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.383 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.383 16:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.383 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.383 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.383 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.383 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.640 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.640 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.640 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.640 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.898 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.898 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:16.898 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.898 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.156 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.156 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:17.156 16:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:17.414 16:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:17.671 16:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:18.604 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:18.604 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:18.604 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.604 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.862 16:17:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.862 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.119 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.119 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.119 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.119 16:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.377 
16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.377 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:19.377 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.377 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.635 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.635 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:19.635 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.635 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.893 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.893 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:19.893 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:19.893 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.150 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:21.083 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:21.083 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:21.083 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.083 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.341 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.341 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:21.341 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.341 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.599 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.599 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.599 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.599 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.857 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.857 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.857 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.857 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.115 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.115 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:22.115 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.115 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.373 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.373 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.373 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.373 16:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.373 16:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.373 16:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:22.632 16:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:22.632 16:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:22.890 16:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:23.148 16:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:24.081 16:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:24.081 16:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.081 16:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:24.081 16:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.339 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.339 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:24.339 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.339 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.613 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.613 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.613 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.613 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.891 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.891 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.892 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:24.892 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.892 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.892 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.892 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.892 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.178 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.178 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.178 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.178 16:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.449 16:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.449 16:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:25.449 16:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:25.707 16:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:25.707 16:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.081 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.340 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.340 16:17:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.340 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.340 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.340 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.340 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.340 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.340 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.598 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.598 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:27.598 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.598 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.856 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.856 
16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:27.856 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.856 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.113 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.113 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:28.114 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:28.371 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:28.627 16:17:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:29.560 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:29.560 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:29.560 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.560 16:17:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.819 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.077 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.077 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:30.077 16:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.077 16:17:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.335 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.335 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:30.335 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.335 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.593 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.593 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.593 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.593 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.850 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.850 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:30.850 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:30.850 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:31.107 16:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:32.038 16:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:32.038 16:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.038 16:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.038 16:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.295 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.295 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:32.295 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.295 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.551 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.551 16:17:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.551 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.551 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.808 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.808 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.808 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.808 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.066 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.066 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:33.066 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.066 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.324 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.324 
16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:33.324 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.324 16:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2845499 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2845499 ']' 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2845499 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.324 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2845499 00:24:33.595 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:33.595 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:33.595 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845499' 00:24:33.595 killing process with pid 2845499 00:24:33.595 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2845499 00:24:33.595 
16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2845499 00:24:33.595 { 00:24:33.596 "results": [ 00:24:33.596 { 00:24:33.596 "job": "Nvme0n1", 00:24:33.596 "core_mask": "0x4", 00:24:33.596 "workload": "verify", 00:24:33.596 "status": "terminated", 00:24:33.596 "verify_range": { 00:24:33.596 "start": 0, 00:24:33.596 "length": 16384 00:24:33.596 }, 00:24:33.596 "queue_depth": 128, 00:24:33.596 "io_size": 4096, 00:24:33.596 "runtime": 29.00512, 00:24:33.596 "iops": 10488.286206021558, 00:24:33.596 "mibps": 40.96986799227171, 00:24:33.596 "io_failed": 0, 00:24:33.596 "io_timeout": 0, 00:24:33.596 "avg_latency_us": 12183.377047861903, 00:24:33.596 "min_latency_us": 308.09043478260867, 00:24:33.596 "max_latency_us": 3019898.88 00:24:33.596 } 00:24:33.596 ], 00:24:33.596 "core_count": 1 00:24:33.596 } 00:24:33.596 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2845499 00:24:33.596 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:33.596 [2024-11-20 16:17:03.953194] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:24:33.596 [2024-11-20 16:17:03.953250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845499 ] 00:24:33.596 [2024-11-20 16:17:04.028563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.596 [2024-11-20 16:17:04.069913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.596 Running I/O for 90 seconds... 
00:24:33.596 11347.00 IOPS, 44.32 MiB/s [2024-11-20T15:17:34.433Z] 11367.00 IOPS, 44.40 MiB/s [2024-11-20T15:17:34.433Z] 11382.00 IOPS, 44.46 MiB/s [2024-11-20T15:17:34.433Z] 11384.75 IOPS, 44.47 MiB/s [2024-11-20T15:17:34.433Z] 11360.80 IOPS, 44.38 MiB/s [2024-11-20T15:17:34.433Z] 11351.17 IOPS, 44.34 MiB/s [2024-11-20T15:17:34.433Z] 11315.14 IOPS, 44.20 MiB/s [2024-11-20T15:17:34.433Z] 11305.88 IOPS, 44.16 MiB/s [2024-11-20T15:17:34.433Z] 11309.56 IOPS, 44.18 MiB/s [2024-11-20T15:17:34.433Z] 11319.40 IOPS, 44.22 MiB/s [2024-11-20T15:17:34.433Z] 11336.55 IOPS, 44.28 MiB/s [2024-11-20T15:17:34.433Z] 11331.50 IOPS, 44.26 MiB/s [2024-11-20T15:17:34.433Z] [2024-11-20 16:17:18.020122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.596 [2024-11-20 16:17:18.020159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.596 [2024-11-20 16:17:18.020219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.596 [2024-11-20 16:17:18.020241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.596 [2024-11-20 16:17:18.020261] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.596 [2024-11-20 16:17:18.020281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.596 [2024-11-20 16:17:18.020300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.596 [2024-11-20 16:17:18.020320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.596 [2024-11-20 16:17:18.020339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.596 [2024-11-20 16:17:18.020358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:33.596 [2024-11-20 16:17:18.020371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.596 [2024-11-20 16:17:18.020384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:33.596 [2024-11-20 16:17:18.020396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.596 [2024-11-20 16:17:18.020404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for READ lba:115672 through lba:116552 (8-block stride, sqhd advancing 0055 onward and wrapping at 007f), every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02); two interleaved WRITE commands (cid:78 lba:116640 and cid:77 lba:116648, SGL DATA BLOCK OFFSET 0x0 len:0x1000) fail with the same status ...]
00:24:33.599 [2024-11-20 16:17:18.023148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.599 [2024-11-20 16:17:18.023154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0
dnr:0
00:24:33.599 [2024-11-20 16:17:18.023174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.599 [2024-11-20 16:17:18.023181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:33.599 [2024-11-20 16:17:18.023198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.599 [2024-11-20 16:17:18.023205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:33.599 [2024-11-20 16:17:18.023223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.600 [2024-11-20 16:17:18.023230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:33.600 11194.31 IOPS, 43.73 MiB/s [2024-11-20T15:17:34.437Z]
10394.71 IOPS, 40.60 MiB/s [2024-11-20T15:17:34.437Z]
9701.73 IOPS, 37.90 MiB/s [2024-11-20T15:17:34.437Z]
9201.69 IOPS, 35.94 MiB/s [2024-11-20T15:17:34.437Z]
9325.35 IOPS, 36.43 MiB/s [2024-11-20T15:17:34.437Z]
9429.39 IOPS, 36.83 MiB/s [2024-11-20T15:17:34.437Z]
9580.58 IOPS, 37.42 MiB/s [2024-11-20T15:17:34.437Z]
9782.00 IOPS, 38.21 MiB/s [2024-11-20T15:17:34.437Z]
9952.76 IOPS, 38.88 MiB/s [2024-11-20T15:17:34.437Z]
10019.23 IOPS, 39.14 MiB/s [2024-11-20T15:17:34.437Z]
10068.57 IOPS, 39.33 MiB/s [2024-11-20T15:17:34.437Z]
10109.71 IOPS, 39.49 MiB/s [2024-11-20T15:17:34.437Z]
10229.04 IOPS, 39.96 MiB/s [2024-11-20T15:17:34.437Z]
10346.92 IOPS, 40.42 MiB/s [2024-11-20T15:17:34.437Z]
[2024-11-20 16:17:31.838880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123160 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.838920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.838941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.838953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.838972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.838979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.838991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.838998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b 
p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.600 [2024-11-20 16:17:31.839191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.600 [2024-11-20 16:17:31.839213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:24:33.600 [2024-11-20 16:17:31.839264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 
[2024-11-20 16:17:31.839368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 16:17:31.839459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.600 [2024-11-20 16:17:31.839466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.600 [2024-11-20 
16:17:31.839478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.601 [2024-11-20 16:17:31.839506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.601 [2024-11-20 16:17:31.839525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 
16:17:31.839582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.839672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.839681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 
16:17:31.841196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 
16:17:31.841318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 
16:17:31.841427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 16:17:31.841515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.601 [2024-11-20 
16:17:31.841534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.601 [2024-11-20 16:17:31.841715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.601 [2024-11-20 16:17:31.841736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.601 [2024-11-20 16:17:31.841755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.601 [2024-11-20 16:17:31.841767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.601 [2024-11-20 16:17:31.841774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.841794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 
16:17:31.841806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.841812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.841832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 
16:17:31.841912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.841989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.841996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 
16:17:31.842027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 
16:17:31.842316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.842337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 
16:17:31.842427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.842472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.842485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.842492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.843301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 
16:17:31.843324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.843343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.602 [2024-11-20 16:17:31.843363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 
16:17:31.843433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.602 [2024-11-20 16:17:31.843497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:33.602 [2024-11-20 16:17:31.843509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.843516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 
16:17:31.843537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.843556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 
16:17:31.843644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 
16:17:31.843746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.843767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.843788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.843807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.843839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.843845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 
16:17:31.844129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 
16:17:31.844238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 
16:17:31.844348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.844374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 
16:17:31.844450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.844463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.603 [2024-11-20 16:17:31.844469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.845358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.845376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.845392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.845399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.845415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.603 [2024-11-20 16:17:31.845422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:33.603 [2024-11-20 16:17:31.845434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 
16:17:31.845453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 
16:17:31.845564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.845603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.845622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 
16:17:31.845674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.845740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.845759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 
16:17:31.845778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.845797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.845816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.845829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.845836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 
16:17:31.846087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.846185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 
16:17:31.846204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.846262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.846281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.846301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 
16:17:31.846313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.846320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.604 [2024-11-20 16:17:31.846751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:33.604 [2024-11-20 16:17:31.846763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.604 [2024-11-20 16:17:31.846770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 
16:17:31.846789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 
16:17:31.846898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.846970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.846982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.846989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 
16:17:31.847008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.847065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.847083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 
16:17:31.847288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.847358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.847380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 
16:17:31.847399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.847437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.847468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.847475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.848519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.848536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 
16:17:31.848552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.605 [2024-11-20 16:17:31.848559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.848572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.848579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.848591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.848598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.848610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.848617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:33.605 [2024-11-20 16:17:31.848629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.605 [2024-11-20 16:17:31.848636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.848653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 
16:17:31.848660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.848673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.848681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.848694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.848700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.848713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.848720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.848739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.849993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 
16:17:31.850026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 
16:17:31.850128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 
16:17:31.850241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 
16:17:31.850343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.850362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 
16:17:31.850452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 
16:17:31.850553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.606 [2024-11-20 16:17:31.850572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.850908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.861755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.861781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.861791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.861812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.861822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:33.606 [2024-11-20 16:17:31.861839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.606 [2024-11-20 16:17:31.861849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 
16:17:31.861866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.861876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.861893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.861904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.861920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.861929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.861960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.861971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.862727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.862906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.862916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.863348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.863366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.863385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.863394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.863422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.863440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.863450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.865631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.865685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.865716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.607 [2024-11-20 16:17:31.865771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.607 [2024-11-20 16:17:31.865799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.607 [2024-11-20 16:17:31.865817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.865827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.865844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.865856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.865903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.865913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.865930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.865940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.865966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.865977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.868929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.868953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.608 [2024-11-20 16:17:31.868965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.870461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.870484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:33.608 [2024-11-20 16:17:31.870507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.608 [2024-11-20 16:17:31.870517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.870870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.870983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.870994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.871024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.871052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.871087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.609 [2024-11-20 16:17:31.871116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.871146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.609 [2024-11-20 16:17:31.871176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:33.609 [2024-11-20 16:17:31.871194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.609 [2024-11-20 16:17:31.871470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.609 [2024-11-20 16:17:31.871616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:33.609 [2024-11-20 16:17:31.871634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.871644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.871663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.871673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.874962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.874980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.874991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.875009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.610 [2024-11-20 16:17:31.875019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.875040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.875052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.875070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.875081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:33.610 [2024-11-20 16:17:31.875099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.610 [2024-11-20 16:17:31.875110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.875465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.875573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.877326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.877384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.877415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.877444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.877493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.877503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.611 [2024-11-20 16:17:31.878027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.878058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.878089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.878117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.878146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.878186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:33.611 [2024-11-20 16:17:31.878205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.611 [2024-11-20 16:17:31.878215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.878694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.878728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.878736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.880083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.880106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.612 [2024-11-20 16:17:31.880188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:33.612 [2024-11-20 16:17:31.880260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.612 [2024-11-20 16:17:31.880267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:33.612 [2024-11-20 16:17:31.880279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.612 [2024-11-20 16:17:31.880286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:33.612 [2024-11-20 16:17:31.880298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.612 [2024-11-20 16:17:31.880306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:33.612 [2024-11-20 16:17:31.880319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.612 [2024-11-20 16:17:31.880329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:33.612 [2024-11-20 16:17:31.880342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.612 [2024-11-20 16:17:31.880350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.880369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.880391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.880411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.880431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.880450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.880471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.880491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.880510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.880530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.880551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.880573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.880586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.880593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.613 [2024-11-20 16:17:31.882616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:33.613 [2024-11-20 16:17:31.882669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.613 [2024-11-20 16:17:31.882676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.882689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.882697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.882710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.882718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.883470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.883483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.883491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.884616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.884637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.884657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.884678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.614 [2024-11-20 16:17:31.884698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.614 [2024-11-20 16:17:31.884859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:33.614 [2024-11-20 16:17:31.884871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.884878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.884890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.884898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.884911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.884919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.884931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.884938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.884957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.884967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.884979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.884986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.884998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.885163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.885184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.885238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.885245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.615 [2024-11-20 16:17:31.886433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.615 [2024-11-20 16:17:31.886574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:33.615 [2024-11-20 16:17:31.886586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.886633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.886654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.886673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.886766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.886773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.887365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.887389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.887408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.887428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.887448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.887468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.887487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.887509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.887523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.887531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.888167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.888190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.616 [2024-11-20 16:17:31.888410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.616 [2024-11-20 16:17:31.888431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:33.616 [2024-11-20 16:17:31.888443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.888450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.888469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.888490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.888511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.888530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.888550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.888571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.888591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.888603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.888611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.889013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.889193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.889248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.889256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.890870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.890890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.617 [2024-11-20 16:17:31.890909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.890990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.890996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.891008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.891016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.891028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.891035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:33.617 [2024-11-20 16:17:31.891048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.617 [2024-11-20 16:17:31.891056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.891095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.891277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.891371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.891378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.892811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.892835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.892856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.892878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.892898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.892919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.892939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.892964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.892984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.892998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.618 [2024-11-20 16:17:31.893164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.893183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.893203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:33.618 [2024-11-20 16:17:31.893215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.618 [2024-11-20 16:17:31.893222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.893725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.893894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.893902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.894907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.894924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.894941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.894954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.894967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.894974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.894986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.894993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.895014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.895034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.895053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.895072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.619 [2024-11-20 16:17:31.895092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.895113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.895133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.619 [2024-11-20 16:17:31.895152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:33.619 [2024-11-20 16:17:31.895166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.620 [2024-11-20 16:17:31.895173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:33.620 [2024-11-20 16:17:31.895190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.620 [2024-11-20 16:17:31.895198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:33.620 [2024-11-20 16:17:31.895211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.620 [2024-11-20 16:17:31.895217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:33.620 [2024-11-20 16:17:31.895230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.620 [2024-11-20 16:17:31.895237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:33.620 10425.70 IOPS, 40.73 MiB/s [2024-11-20T15:17:34.457Z] 10456.21 IOPS, 40.84 MiB/s [2024-11-20T15:17:34.457Z] Received shutdown signal, test time was about 29.005783 seconds 00:24:33.620 00:24:33.620 Latency(us) 00:24:33.620 [2024-11-20T15:17:34.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.620 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:33.620 Verification LBA range: start 0x0 length 0x4000 00:24:33.620 Nvme0n1 : 29.01 10488.29 40.97 0.00 0.00 12183.38 308.09 3019898.88 00:24:33.620 [2024-11-20T15:17:34.457Z] =================================================================================================================== 00:24:33.620 [2024-11-20T15:17:34.457Z] Total : 10488.29 40.97 0.00 0.00 12183.38 308.09 3019898.88 00:24:33.620 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.878 rmmod nvme_tcp 00:24:33.878 rmmod nvme_fabrics 00:24:33.878 rmmod nvme_keyring 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2845246 ']' 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2845246 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2845246 ']' 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2845246 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.878 
16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2845246 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845246' 00:24:33.878 killing process with pid 2845246 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2845246 00:24:33.878 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2845246 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.137 
16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.137 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.673 00:24:36.673 real 0m40.731s 00:24:36.673 user 1m50.551s 00:24:36.673 sys 0m11.621s 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:36.673 ************************************ 00:24:36.673 END TEST nvmf_host_multipath_status 00:24:36.673 ************************************ 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.673 16:17:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.673 ************************************ 00:24:36.673 START TEST nvmf_discovery_remove_ifc 00:24:36.673 ************************************ 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:36.673 * Looking for test storage... 
00:24:36.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:36.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.673 --rc genhtml_branch_coverage=1 00:24:36.673 --rc genhtml_function_coverage=1 00:24:36.673 --rc genhtml_legend=1 00:24:36.673 --rc geninfo_all_blocks=1 00:24:36.673 --rc geninfo_unexecuted_blocks=1 00:24:36.673 00:24:36.673 ' 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.673 --rc genhtml_branch_coverage=1 00:24:36.673 --rc genhtml_function_coverage=1 00:24:36.673 --rc genhtml_legend=1 00:24:36.673 --rc geninfo_all_blocks=1 00:24:36.673 --rc geninfo_unexecuted_blocks=1 00:24:36.673 00:24:36.673 ' 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.673 --rc genhtml_branch_coverage=1 00:24:36.673 --rc genhtml_function_coverage=1 00:24:36.673 --rc genhtml_legend=1 00:24:36.673 --rc geninfo_all_blocks=1 00:24:36.673 --rc geninfo_unexecuted_blocks=1 00:24:36.673 00:24:36.673 ' 00:24:36.673 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.673 --rc genhtml_branch_coverage=1 00:24:36.673 --rc genhtml_function_coverage=1 00:24:36.674 --rc genhtml_legend=1 00:24:36.674 --rc geninfo_all_blocks=1 00:24:36.674 --rc geninfo_unexecuted_blocks=1 00:24:36.674 00:24:36.674 ' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.674 
16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.674 16:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.254 16:17:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.254 16:17:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:43.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.254 16:17:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:43.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.254 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:43.255 Found net devices under 0000:86:00.0: cvl_0_0 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:43.255 Found net devices under 0000:86:00.1: cvl_0_1 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.255 16:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:24:43.255 00:24:43.255 --- 10.0.0.2 ping statistics --- 00:24:43.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.255 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:43.255 00:24:43.255 --- 10.0.0.1 ping statistics --- 00:24:43.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.255 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2854266 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2854266 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2854266 ']' 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.255 [2024-11-20 16:17:43.246266] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:24:43.255 [2024-11-20 16:17:43.246311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.255 [2024-11-20 16:17:43.326265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.255 [2024-11-20 16:17:43.366915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.255 [2024-11-20 16:17:43.366957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:43.255 [2024-11-20 16:17:43.366965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.255 [2024-11-20 16:17:43.366970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.255 [2024-11-20 16:17:43.366976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.255 [2024-11-20 16:17:43.367510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.255 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.255 [2024-11-20 16:17:43.511122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.255 [2024-11-20 16:17:43.519304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:43.256 null0 00:24:43.256 [2024-11-20 16:17:43.551283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2854285 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2854285 /tmp/host.sock 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2854285 ']' 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:43.256 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.256 [2024-11-20 16:17:43.619705] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:24:43.256 [2024-11-20 16:17:43.619748] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854285 ] 00:24:43.256 [2024-11-20 16:17:43.692811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.256 [2024-11-20 16:17:43.735503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.256 16:17:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.256 16:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.188 [2024-11-20 16:17:44.877351] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.188 [2024-11-20 16:17:44.877369] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.188 [2024-11-20 16:17:44.877385] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.188 [2024-11-20 16:17:45.003777] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:44.445 [2024-11-20 16:17:45.058373] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:44.445 [2024-11-20 16:17:45.059189] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15a4a10:1 started. 
00:24:44.445 [2024-11-20 16:17:45.060551] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:44.445 [2024-11-20 16:17:45.060591] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:44.445 [2024-11-20 16:17:45.060620] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:44.445 [2024-11-20 16:17:45.060632] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:44.445 [2024-11-20 16:17:45.060650] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.445 [2024-11-20 16:17:45.066530] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15a4a10 was disconnected and freed. delete nvme_qpair. 
00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.445 16:17:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.445 16:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.818 16:17:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:46.751 16:17:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.685 16:17:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:47.685 16:17:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.617 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.889 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:48.889 16:17:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.823 16:17:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.823 [2024-11-20 16:17:50.502150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:49.823 [2024-11-20 16:17:50.502193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.823 [2024-11-20 16:17:50.502209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.823 [2024-11-20 16:17:50.502218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.823 [2024-11-20 16:17:50.502225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.823 [2024-11-20 16:17:50.502232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.823 [2024-11-20 16:17:50.502239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.823 [2024-11-20 16:17:50.502246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.823 
[2024-11-20 16:17:50.502253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.823 [2024-11-20 16:17:50.502260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.823 [2024-11-20 16:17:50.502267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.823 [2024-11-20 16:17:50.502273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581220 is same with the state(6) to be set 00:24:49.823 [2024-11-20 16:17:50.512174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581220 (9): Bad file descriptor 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:49.823 16:17:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.823 [2024-11-20 16:17:50.522209] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:49.823 [2024-11-20 16:17:50.522223] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:49.823 [2024-11-20 16:17:50.522228] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:49.823 [2024-11-20 16:17:50.522234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:49.823 [2024-11-20 16:17:50.522254] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.759 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.759 [2024-11-20 16:17:51.536868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:50.759 [2024-11-20 16:17:51.536946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1581220 with addr=10.0.0.2, port=4420 00:24:50.759 [2024-11-20 16:17:51.537003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581220 is same with the state(6) to be set 00:24:50.759 [2024-11-20 16:17:51.537061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1581220 (9): Bad file descriptor 00:24:50.759 [2024-11-20 16:17:51.538005] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:50.759 [2024-11-20 16:17:51.538078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:50.759 [2024-11-20 16:17:51.538101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:50.759 [2024-11-20 16:17:51.538125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:50.759 [2024-11-20 16:17:51.538145] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:50.760 [2024-11-20 16:17:51.538161] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:50.760 [2024-11-20 16:17:51.538174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:50.760 [2024-11-20 16:17:51.538194] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:50.760 [2024-11-20 16:17:51.538209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:50.760 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.760 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:50.760 16:17:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:52.133 [2024-11-20 16:17:52.540732] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:52.133 [2024-11-20 16:17:52.540753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:52.133 [2024-11-20 16:17:52.540765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:52.133 [2024-11-20 16:17:52.540771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:52.133 [2024-11-20 16:17:52.540778] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:52.133 [2024-11-20 16:17:52.540785] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:52.133 [2024-11-20 16:17:52.540790] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:52.133 [2024-11-20 16:17:52.540794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:52.133 [2024-11-20 16:17:52.540818] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:52.133 [2024-11-20 16:17:52.540837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.133 [2024-11-20 16:17:52.540846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.133 [2024-11-20 16:17:52.540856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.133 [2024-11-20 16:17:52.540862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.133 [2024-11-20 16:17:52.540869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:52.133 [2024-11-20 16:17:52.540876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.133 [2024-11-20 16:17:52.540883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.133 [2024-11-20 16:17:52.540889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.133 [2024-11-20 16:17:52.540900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.133 [2024-11-20 16:17:52.540906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.133 [2024-11-20 16:17:52.540913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:52.133 [2024-11-20 16:17:52.541371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570900 (9): Bad file descriptor 00:24:52.133 [2024-11-20 16:17:52.542382] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:52.133 [2024-11-20 16:17:52.542392] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:52.133 16:17:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:53.067 16:17:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.999 [2024-11-20 16:17:54.593425] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:53.999 [2024-11-20 16:17:54.593443] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:54.000 [2024-11-20 16:17:54.593455] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:54.000 [2024-11-20 16:17:54.722850] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.000 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.000 [2024-11-20 16:17:54.823523] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:54.000 [2024-11-20 16:17:54.824194] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1575830:1 started. 
00:24:54.000 [2024-11-20 16:17:54.825270] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:54.000 [2024-11-20 16:17:54.825302] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:54.000 [2024-11-20 16:17:54.825319] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:54.000 [2024-11-20 16:17:54.825332] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:54.000 [2024-11-20 16:17:54.825339] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:54.257 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:54.257 16:17:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.257 [2024-11-20 16:17:54.872512] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1575830 was disconnected and freed. delete nvme_qpair. 
00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2854285 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2854285 ']' 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2854285 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854285 
00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854285' 00:24:55.241 killing process with pid 2854285 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2854285 00:24:55.241 16:17:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2854285 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.519 rmmod nvme_tcp 00:24:55.519 rmmod nvme_fabrics 00:24:55.519 rmmod nvme_keyring 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2854266 ']' 00:24:55.519 
16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2854266 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2854266 ']' 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2854266 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854266 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854266' 00:24:55.519 killing process with pid 2854266 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2854266 00:24:55.519 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2854266 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.778 16:17:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.778 16:17:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.683 00:24:57.683 real 0m21.434s 00:24:57.683 user 0m26.651s 00:24:57.683 sys 0m5.853s 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.683 ************************************ 00:24:57.683 END TEST nvmf_discovery_remove_ifc 00:24:57.683 ************************************ 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.683 16:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.943 ************************************ 00:24:57.943 
START TEST nvmf_identify_kernel_target 00:24:57.943 ************************************ 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:57.943 * Looking for test storage... 00:24:57.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.943 16:17:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.943 --rc genhtml_branch_coverage=1 00:24:57.943 --rc genhtml_function_coverage=1 00:24:57.943 --rc genhtml_legend=1 00:24:57.943 --rc geninfo_all_blocks=1 00:24:57.943 --rc geninfo_unexecuted_blocks=1 00:24:57.943 00:24:57.943 ' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.943 --rc genhtml_branch_coverage=1 00:24:57.943 --rc genhtml_function_coverage=1 00:24:57.943 --rc genhtml_legend=1 00:24:57.943 --rc geninfo_all_blocks=1 00:24:57.943 --rc geninfo_unexecuted_blocks=1 00:24:57.943 00:24:57.943 ' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.943 --rc genhtml_branch_coverage=1 00:24:57.943 --rc genhtml_function_coverage=1 00:24:57.943 --rc genhtml_legend=1 00:24:57.943 --rc geninfo_all_blocks=1 00:24:57.943 --rc geninfo_unexecuted_blocks=1 00:24:57.943 00:24:57.943 ' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.943 --rc genhtml_branch_coverage=1 00:24:57.943 --rc genhtml_function_coverage=1 00:24:57.943 --rc genhtml_legend=1 00:24:57.943 --rc geninfo_all_blocks=1 
00:24:57.943 --rc geninfo_unexecuted_blocks=1 00:24:57.943 00:24:57.943 ' 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:57.943 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.944 16:17:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.514 16:18:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:04.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:04.514 16:18:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:04.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:04.514 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.514 16:18:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:04.515 Found net devices under 0000:86:00.0: cvl_0_0 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:04.515 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:04.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:04.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:25:04.515 00:25:04.515 --- 10.0.0.2 ping statistics --- 00:25:04.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.515 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:04.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:04.515 00:25:04.515 --- 10.0.0.1 ping statistics --- 00:25:04.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.515 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:04.515 
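For reference, the `nvmf_tcp_init` sequence traced above reduces to the following sketch. The interface names (`cvl_0_0`, `cvl_0_1`), namespace name, and 10.0.0.0/24 addresses are taken from this run's log; this is an illustrative reconstruction of what the harness does, not the harness itself, and it requires root:

```shell
# Move the target-side interface into its own network namespace so the
# kernel initiator and the SPDK target see distinct network stacks.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Target side, configured from inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic on the default port 4420; the comment tag lets
# later cleanup (iptables-save | grep -v SPDK_NVMF) strip exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify connectivity in both directions before starting the test.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Once this succeeds, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (via `NVMF_TARGET_NS_CMD`), which is why both pings in the trace must pass before the script returns 0.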
16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:04.515 16:18:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:07.049 Waiting for block devices as requested 00:25:07.049 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:07.049 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:07.049 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:07.049 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:07.049 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:07.049 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:07.308 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:07.308 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:07.308 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:07.567 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:07.567 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:07.567 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:07.567 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:07.825 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:07.825 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:07.825 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:08.084 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:08.084 No valid GPT data, bailing 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:08.084 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:08.085 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:08.085 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:08.085 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:08.085 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:08.085 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:08.085 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:08.345 00:25:08.345 Discovery Log Number of Records 2, Generation counter 2 00:25:08.345 =====Discovery Log Entry 0====== 00:25:08.345 trtype: tcp 00:25:08.345 adrfam: ipv4 00:25:08.345 subtype: current discovery subsystem 
00:25:08.345 treq: not specified, sq flow control disable supported 00:25:08.345 portid: 1 00:25:08.345 trsvcid: 4420 00:25:08.345 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:08.345 traddr: 10.0.0.1 00:25:08.345 eflags: none 00:25:08.345 sectype: none 00:25:08.345 =====Discovery Log Entry 1====== 00:25:08.345 trtype: tcp 00:25:08.345 adrfam: ipv4 00:25:08.345 subtype: nvme subsystem 00:25:08.345 treq: not specified, sq flow control disable supported 00:25:08.345 portid: 1 00:25:08.345 trsvcid: 4420 00:25:08.345 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:08.345 traddr: 10.0.0.1 00:25:08.345 eflags: none 00:25:08.345 sectype: none 00:25:08.345 16:18:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:08.345 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:08.345 ===================================================== 00:25:08.345 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:08.345 ===================================================== 00:25:08.345 Controller Capabilities/Features 00:25:08.345 ================================ 00:25:08.345 Vendor ID: 0000 00:25:08.345 Subsystem Vendor ID: 0000 00:25:08.345 Serial Number: 1857632e8e3fba355f65 00:25:08.345 Model Number: Linux 00:25:08.345 Firmware Version: 6.8.9-20 00:25:08.345 Recommended Arb Burst: 0 00:25:08.345 IEEE OUI Identifier: 00 00 00 00:25:08.346 Multi-path I/O 00:25:08.346 May have multiple subsystem ports: No 00:25:08.346 May have multiple controllers: No 00:25:08.346 Associated with SR-IOV VF: No 00:25:08.346 Max Data Transfer Size: Unlimited 00:25:08.346 Max Number of Namespaces: 0 00:25:08.346 Max Number of I/O Queues: 1024 00:25:08.346 NVMe Specification Version (VS): 1.3 00:25:08.346 NVMe Specification Version (Identify): 1.3 00:25:08.346 Maximum Queue Entries: 1024 
00:25:08.346 Contiguous Queues Required: No 00:25:08.346 Arbitration Mechanisms Supported 00:25:08.346 Weighted Round Robin: Not Supported 00:25:08.346 Vendor Specific: Not Supported 00:25:08.346 Reset Timeout: 7500 ms 00:25:08.346 Doorbell Stride: 4 bytes 00:25:08.346 NVM Subsystem Reset: Not Supported 00:25:08.346 Command Sets Supported 00:25:08.346 NVM Command Set: Supported 00:25:08.346 Boot Partition: Not Supported 00:25:08.346 Memory Page Size Minimum: 4096 bytes 00:25:08.346 Memory Page Size Maximum: 4096 bytes 00:25:08.346 Persistent Memory Region: Not Supported 00:25:08.346 Optional Asynchronous Events Supported 00:25:08.346 Namespace Attribute Notices: Not Supported 00:25:08.346 Firmware Activation Notices: Not Supported 00:25:08.346 ANA Change Notices: Not Supported 00:25:08.346 PLE Aggregate Log Change Notices: Not Supported 00:25:08.346 LBA Status Info Alert Notices: Not Supported 00:25:08.346 EGE Aggregate Log Change Notices: Not Supported 00:25:08.346 Normal NVM Subsystem Shutdown event: Not Supported 00:25:08.346 Zone Descriptor Change Notices: Not Supported 00:25:08.346 Discovery Log Change Notices: Supported 00:25:08.346 Controller Attributes 00:25:08.346 128-bit Host Identifier: Not Supported 00:25:08.346 Non-Operational Permissive Mode: Not Supported 00:25:08.346 NVM Sets: Not Supported 00:25:08.346 Read Recovery Levels: Not Supported 00:25:08.346 Endurance Groups: Not Supported 00:25:08.346 Predictable Latency Mode: Not Supported 00:25:08.346 Traffic Based Keep ALive: Not Supported 00:25:08.346 Namespace Granularity: Not Supported 00:25:08.346 SQ Associations: Not Supported 00:25:08.346 UUID List: Not Supported 00:25:08.346 Multi-Domain Subsystem: Not Supported 00:25:08.346 Fixed Capacity Management: Not Supported 00:25:08.346 Variable Capacity Management: Not Supported 00:25:08.346 Delete Endurance Group: Not Supported 00:25:08.346 Delete NVM Set: Not Supported 00:25:08.346 Extended LBA Formats Supported: Not Supported 00:25:08.346 Flexible 
Data Placement Supported: Not Supported 00:25:08.346 00:25:08.346 Controller Memory Buffer Support 00:25:08.346 ================================ 00:25:08.346 Supported: No 00:25:08.346 00:25:08.346 Persistent Memory Region Support 00:25:08.346 ================================ 00:25:08.346 Supported: No 00:25:08.346 00:25:08.346 Admin Command Set Attributes 00:25:08.346 ============================ 00:25:08.346 Security Send/Receive: Not Supported 00:25:08.346 Format NVM: Not Supported 00:25:08.346 Firmware Activate/Download: Not Supported 00:25:08.346 Namespace Management: Not Supported 00:25:08.346 Device Self-Test: Not Supported 00:25:08.346 Directives: Not Supported 00:25:08.346 NVMe-MI: Not Supported 00:25:08.346 Virtualization Management: Not Supported 00:25:08.346 Doorbell Buffer Config: Not Supported 00:25:08.346 Get LBA Status Capability: Not Supported 00:25:08.346 Command & Feature Lockdown Capability: Not Supported 00:25:08.346 Abort Command Limit: 1 00:25:08.346 Async Event Request Limit: 1 00:25:08.346 Number of Firmware Slots: N/A 00:25:08.346 Firmware Slot 1 Read-Only: N/A 00:25:08.346 Firmware Activation Without Reset: N/A 00:25:08.346 Multiple Update Detection Support: N/A 00:25:08.346 Firmware Update Granularity: No Information Provided 00:25:08.346 Per-Namespace SMART Log: No 00:25:08.346 Asymmetric Namespace Access Log Page: Not Supported 00:25:08.346 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:08.346 Command Effects Log Page: Not Supported 00:25:08.346 Get Log Page Extended Data: Supported 00:25:08.346 Telemetry Log Pages: Not Supported 00:25:08.346 Persistent Event Log Pages: Not Supported 00:25:08.346 Supported Log Pages Log Page: May Support 00:25:08.346 Commands Supported & Effects Log Page: Not Supported 00:25:08.346 Feature Identifiers & Effects Log Page:May Support 00:25:08.346 NVMe-MI Commands & Effects Log Page: May Support 00:25:08.346 Data Area 4 for Telemetry Log: Not Supported 00:25:08.346 Error Log Page Entries 
Supported: 1 00:25:08.346 Keep Alive: Not Supported 00:25:08.346 00:25:08.346 NVM Command Set Attributes 00:25:08.346 ========================== 00:25:08.346 Submission Queue Entry Size 00:25:08.346 Max: 1 00:25:08.346 Min: 1 00:25:08.346 Completion Queue Entry Size 00:25:08.346 Max: 1 00:25:08.346 Min: 1 00:25:08.346 Number of Namespaces: 0 00:25:08.346 Compare Command: Not Supported 00:25:08.346 Write Uncorrectable Command: Not Supported 00:25:08.346 Dataset Management Command: Not Supported 00:25:08.346 Write Zeroes Command: Not Supported 00:25:08.346 Set Features Save Field: Not Supported 00:25:08.346 Reservations: Not Supported 00:25:08.346 Timestamp: Not Supported 00:25:08.346 Copy: Not Supported 00:25:08.346 Volatile Write Cache: Not Present 00:25:08.346 Atomic Write Unit (Normal): 1 00:25:08.346 Atomic Write Unit (PFail): 1 00:25:08.346 Atomic Compare & Write Unit: 1 00:25:08.346 Fused Compare & Write: Not Supported 00:25:08.346 Scatter-Gather List 00:25:08.346 SGL Command Set: Supported 00:25:08.347 SGL Keyed: Not Supported 00:25:08.347 SGL Bit Bucket Descriptor: Not Supported 00:25:08.347 SGL Metadata Pointer: Not Supported 00:25:08.347 Oversized SGL: Not Supported 00:25:08.347 SGL Metadata Address: Not Supported 00:25:08.347 SGL Offset: Supported 00:25:08.347 Transport SGL Data Block: Not Supported 00:25:08.347 Replay Protected Memory Block: Not Supported 00:25:08.347 00:25:08.347 Firmware Slot Information 00:25:08.347 ========================= 00:25:08.347 Active slot: 0 00:25:08.347 00:25:08.347 00:25:08.347 Error Log 00:25:08.347 ========= 00:25:08.347 00:25:08.347 Active Namespaces 00:25:08.347 ================= 00:25:08.347 Discovery Log Page 00:25:08.347 ================== 00:25:08.347 Generation Counter: 2 00:25:08.347 Number of Records: 2 00:25:08.347 Record Format: 0 00:25:08.347 00:25:08.347 Discovery Log Entry 0 00:25:08.347 ---------------------- 00:25:08.347 Transport Type: 3 (TCP) 00:25:08.347 Address Family: 1 (IPv4) 00:25:08.347 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:08.347 Entry Flags: 00:25:08.347 Duplicate Returned Information: 0 00:25:08.347 Explicit Persistent Connection Support for Discovery: 0 00:25:08.347 Transport Requirements: 00:25:08.347 Secure Channel: Not Specified 00:25:08.347 Port ID: 1 (0x0001) 00:25:08.347 Controller ID: 65535 (0xffff) 00:25:08.347 Admin Max SQ Size: 32 00:25:08.347 Transport Service Identifier: 4420 00:25:08.347 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:08.347 Transport Address: 10.0.0.1 00:25:08.347 Discovery Log Entry 1 00:25:08.347 ---------------------- 00:25:08.347 Transport Type: 3 (TCP) 00:25:08.347 Address Family: 1 (IPv4) 00:25:08.347 Subsystem Type: 2 (NVM Subsystem) 00:25:08.347 Entry Flags: 00:25:08.347 Duplicate Returned Information: 0 00:25:08.347 Explicit Persistent Connection Support for Discovery: 0 00:25:08.347 Transport Requirements: 00:25:08.347 Secure Channel: Not Specified 00:25:08.347 Port ID: 1 (0x0001) 00:25:08.347 Controller ID: 65535 (0xffff) 00:25:08.347 Admin Max SQ Size: 32 00:25:08.347 Transport Service Identifier: 4420 00:25:08.347 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:08.347 Transport Address: 10.0.0.1 00:25:08.347 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:08.347 get_feature(0x01) failed 00:25:08.347 get_feature(0x02) failed 00:25:08.347 get_feature(0x04) failed 00:25:08.347 ===================================================== 00:25:08.347 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:08.347 ===================================================== 00:25:08.347 Controller Capabilities/Features 00:25:08.347 ================================ 00:25:08.347 Vendor ID: 0000 00:25:08.347 Subsystem Vendor ID: 
0000 00:25:08.347 Serial Number: f0897cb61b945d4e38d8 00:25:08.347 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:08.347 Firmware Version: 6.8.9-20 00:25:08.347 Recommended Arb Burst: 6 00:25:08.347 IEEE OUI Identifier: 00 00 00 00:25:08.347 Multi-path I/O 00:25:08.347 May have multiple subsystem ports: Yes 00:25:08.347 May have multiple controllers: Yes 00:25:08.347 Associated with SR-IOV VF: No 00:25:08.347 Max Data Transfer Size: Unlimited 00:25:08.347 Max Number of Namespaces: 1024 00:25:08.347 Max Number of I/O Queues: 128 00:25:08.347 NVMe Specification Version (VS): 1.3 00:25:08.347 NVMe Specification Version (Identify): 1.3 00:25:08.347 Maximum Queue Entries: 1024 00:25:08.347 Contiguous Queues Required: No 00:25:08.347 Arbitration Mechanisms Supported 00:25:08.347 Weighted Round Robin: Not Supported 00:25:08.347 Vendor Specific: Not Supported 00:25:08.347 Reset Timeout: 7500 ms 00:25:08.347 Doorbell Stride: 4 bytes 00:25:08.347 NVM Subsystem Reset: Not Supported 00:25:08.347 Command Sets Supported 00:25:08.347 NVM Command Set: Supported 00:25:08.347 Boot Partition: Not Supported 00:25:08.347 Memory Page Size Minimum: 4096 bytes 00:25:08.347 Memory Page Size Maximum: 4096 bytes 00:25:08.347 Persistent Memory Region: Not Supported 00:25:08.347 Optional Asynchronous Events Supported 00:25:08.347 Namespace Attribute Notices: Supported 00:25:08.347 Firmware Activation Notices: Not Supported 00:25:08.347 ANA Change Notices: Supported 00:25:08.347 PLE Aggregate Log Change Notices: Not Supported 00:25:08.347 LBA Status Info Alert Notices: Not Supported 00:25:08.347 EGE Aggregate Log Change Notices: Not Supported 00:25:08.347 Normal NVM Subsystem Shutdown event: Not Supported 00:25:08.347 Zone Descriptor Change Notices: Not Supported 00:25:08.347 Discovery Log Change Notices: Not Supported 00:25:08.347 Controller Attributes 00:25:08.347 128-bit Host Identifier: Supported 00:25:08.347 Non-Operational Permissive Mode: Not Supported 00:25:08.347 NVM Sets: Not 
Supported 00:25:08.347 Read Recovery Levels: Not Supported 00:25:08.347 Endurance Groups: Not Supported 00:25:08.347 Predictable Latency Mode: Not Supported 00:25:08.347 Traffic Based Keep ALive: Supported 00:25:08.347 Namespace Granularity: Not Supported 00:25:08.347 SQ Associations: Not Supported 00:25:08.348 UUID List: Not Supported 00:25:08.348 Multi-Domain Subsystem: Not Supported 00:25:08.348 Fixed Capacity Management: Not Supported 00:25:08.348 Variable Capacity Management: Not Supported 00:25:08.348 Delete Endurance Group: Not Supported 00:25:08.348 Delete NVM Set: Not Supported 00:25:08.348 Extended LBA Formats Supported: Not Supported 00:25:08.348 Flexible Data Placement Supported: Not Supported 00:25:08.348 00:25:08.348 Controller Memory Buffer Support 00:25:08.348 ================================ 00:25:08.348 Supported: No 00:25:08.348 00:25:08.348 Persistent Memory Region Support 00:25:08.348 ================================ 00:25:08.348 Supported: No 00:25:08.348 00:25:08.348 Admin Command Set Attributes 00:25:08.348 ============================ 00:25:08.348 Security Send/Receive: Not Supported 00:25:08.348 Format NVM: Not Supported 00:25:08.348 Firmware Activate/Download: Not Supported 00:25:08.348 Namespace Management: Not Supported 00:25:08.348 Device Self-Test: Not Supported 00:25:08.348 Directives: Not Supported 00:25:08.348 NVMe-MI: Not Supported 00:25:08.348 Virtualization Management: Not Supported 00:25:08.348 Doorbell Buffer Config: Not Supported 00:25:08.348 Get LBA Status Capability: Not Supported 00:25:08.348 Command & Feature Lockdown Capability: Not Supported 00:25:08.348 Abort Command Limit: 4 00:25:08.348 Async Event Request Limit: 4 00:25:08.348 Number of Firmware Slots: N/A 00:25:08.348 Firmware Slot 1 Read-Only: N/A 00:25:08.348 Firmware Activation Without Reset: N/A 00:25:08.348 Multiple Update Detection Support: N/A 00:25:08.348 Firmware Update Granularity: No Information Provided 00:25:08.348 Per-Namespace SMART Log: Yes 
00:25:08.348 Asymmetric Namespace Access Log Page: Supported 00:25:08.348 ANA Transition Time : 10 sec 00:25:08.348 00:25:08.348 Asymmetric Namespace Access Capabilities 00:25:08.348 ANA Optimized State : Supported 00:25:08.348 ANA Non-Optimized State : Supported 00:25:08.348 ANA Inaccessible State : Supported 00:25:08.348 ANA Persistent Loss State : Supported 00:25:08.348 ANA Change State : Supported 00:25:08.348 ANAGRPID is not changed : No 00:25:08.348 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:08.348 00:25:08.348 ANA Group Identifier Maximum : 128 00:25:08.348 Number of ANA Group Identifiers : 128 00:25:08.348 Max Number of Allowed Namespaces : 1024 00:25:08.348 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:08.348 Command Effects Log Page: Supported 00:25:08.348 Get Log Page Extended Data: Supported 00:25:08.348 Telemetry Log Pages: Not Supported 00:25:08.348 Persistent Event Log Pages: Not Supported 00:25:08.348 Supported Log Pages Log Page: May Support 00:25:08.348 Commands Supported & Effects Log Page: Not Supported 00:25:08.348 Feature Identifiers & Effects Log Page:May Support 00:25:08.348 NVMe-MI Commands & Effects Log Page: May Support 00:25:08.348 Data Area 4 for Telemetry Log: Not Supported 00:25:08.348 Error Log Page Entries Supported: 128 00:25:08.348 Keep Alive: Supported 00:25:08.348 Keep Alive Granularity: 1000 ms 00:25:08.348 00:25:08.348 NVM Command Set Attributes 00:25:08.348 ========================== 00:25:08.348 Submission Queue Entry Size 00:25:08.348 Max: 64 00:25:08.348 Min: 64 00:25:08.348 Completion Queue Entry Size 00:25:08.348 Max: 16 00:25:08.348 Min: 16 00:25:08.348 Number of Namespaces: 1024 00:25:08.348 Compare Command: Not Supported 00:25:08.348 Write Uncorrectable Command: Not Supported 00:25:08.348 Dataset Management Command: Supported 00:25:08.348 Write Zeroes Command: Supported 00:25:08.348 Set Features Save Field: Not Supported 00:25:08.348 Reservations: Not Supported 00:25:08.348 Timestamp: Not Supported 
00:25:08.348 Copy: Not Supported 00:25:08.348 Volatile Write Cache: Present 00:25:08.348 Atomic Write Unit (Normal): 1 00:25:08.348 Atomic Write Unit (PFail): 1 00:25:08.348 Atomic Compare & Write Unit: 1 00:25:08.348 Fused Compare & Write: Not Supported 00:25:08.348 Scatter-Gather List 00:25:08.348 SGL Command Set: Supported 00:25:08.348 SGL Keyed: Not Supported 00:25:08.348 SGL Bit Bucket Descriptor: Not Supported 00:25:08.348 SGL Metadata Pointer: Not Supported 00:25:08.348 Oversized SGL: Not Supported 00:25:08.348 SGL Metadata Address: Not Supported 00:25:08.348 SGL Offset: Supported 00:25:08.348 Transport SGL Data Block: Not Supported 00:25:08.348 Replay Protected Memory Block: Not Supported 00:25:08.348 00:25:08.348 Firmware Slot Information 00:25:08.348 ========================= 00:25:08.348 Active slot: 0 00:25:08.348 00:25:08.348 Asymmetric Namespace Access 00:25:08.348 =========================== 00:25:08.348 Change Count : 0 00:25:08.348 Number of ANA Group Descriptors : 1 00:25:08.348 ANA Group Descriptor : 0 00:25:08.348 ANA Group ID : 1 00:25:08.348 Number of NSID Values : 1 00:25:08.348 Change Count : 0 00:25:08.348 ANA State : 1 00:25:08.348 Namespace Identifier : 1 00:25:08.348 00:25:08.348 Commands Supported and Effects 00:25:08.348 ============================== 00:25:08.348 Admin Commands 00:25:08.348 -------------- 00:25:08.348 Get Log Page (02h): Supported 00:25:08.348 Identify (06h): Supported 00:25:08.349 Abort (08h): Supported 00:25:08.349 Set Features (09h): Supported 00:25:08.349 Get Features (0Ah): Supported 00:25:08.349 Asynchronous Event Request (0Ch): Supported 00:25:08.349 Keep Alive (18h): Supported 00:25:08.349 I/O Commands 00:25:08.349 ------------ 00:25:08.349 Flush (00h): Supported 00:25:08.349 Write (01h): Supported LBA-Change 00:25:08.349 Read (02h): Supported 00:25:08.349 Write Zeroes (08h): Supported LBA-Change 00:25:08.349 Dataset Management (09h): Supported 00:25:08.349 00:25:08.349 Error Log 00:25:08.349 ========= 
00:25:08.349 Entry: 0 00:25:08.349 Error Count: 0x3 00:25:08.349 Submission Queue Id: 0x0 00:25:08.349 Command Id: 0x5 00:25:08.349 Phase Bit: 0 00:25:08.349 Status Code: 0x2 00:25:08.349 Status Code Type: 0x0 00:25:08.349 Do Not Retry: 1 00:25:08.349 Error Location: 0x28 00:25:08.349 LBA: 0x0 00:25:08.349 Namespace: 0x0 00:25:08.349 Vendor Log Page: 0x0 00:25:08.349 ----------- 00:25:08.349 Entry: 1 00:25:08.349 Error Count: 0x2 00:25:08.349 Submission Queue Id: 0x0 00:25:08.349 Command Id: 0x5 00:25:08.349 Phase Bit: 0 00:25:08.349 Status Code: 0x2 00:25:08.349 Status Code Type: 0x0 00:25:08.349 Do Not Retry: 1 00:25:08.349 Error Location: 0x28 00:25:08.349 LBA: 0x0 00:25:08.349 Namespace: 0x0 00:25:08.349 Vendor Log Page: 0x0 00:25:08.349 ----------- 00:25:08.349 Entry: 2 00:25:08.349 Error Count: 0x1 00:25:08.349 Submission Queue Id: 0x0 00:25:08.349 Command Id: 0x4 00:25:08.349 Phase Bit: 0 00:25:08.349 Status Code: 0x2 00:25:08.349 Status Code Type: 0x0 00:25:08.349 Do Not Retry: 1 00:25:08.349 Error Location: 0x28 00:25:08.349 LBA: 0x0 00:25:08.349 Namespace: 0x0 00:25:08.349 Vendor Log Page: 0x0 00:25:08.349 00:25:08.349 Number of Queues 00:25:08.349 ================ 00:25:08.349 Number of I/O Submission Queues: 128 00:25:08.349 Number of I/O Completion Queues: 128 00:25:08.349 00:25:08.349 ZNS Specific Controller Data 00:25:08.349 ============================ 00:25:08.349 Zone Append Size Limit: 0 00:25:08.349 00:25:08.349 00:25:08.349 Active Namespaces 00:25:08.349 ================= 00:25:08.349 get_feature(0x05) failed 00:25:08.349 Namespace ID:1 00:25:08.349 Command Set Identifier: NVM (00h) 00:25:08.349 Deallocate: Supported 00:25:08.349 Deallocated/Unwritten Error: Not Supported 00:25:08.349 Deallocated Read Value: Unknown 00:25:08.349 Deallocate in Write Zeroes: Not Supported 00:25:08.349 Deallocated Guard Field: 0xFFFF 00:25:08.349 Flush: Supported 00:25:08.349 Reservation: Not Supported 00:25:08.349 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:08.349 Size (in LBAs): 1953525168 (931GiB) 00:25:08.349 Capacity (in LBAs): 1953525168 (931GiB) 00:25:08.349 Utilization (in LBAs): 1953525168 (931GiB) 00:25:08.349 UUID: 4228d58c-7125-47b3-b734-afd886747141 00:25:08.349 Thin Provisioning: Not Supported 00:25:08.349 Per-NS Atomic Units: Yes 00:25:08.349 Atomic Boundary Size (Normal): 0 00:25:08.349 Atomic Boundary Size (PFail): 0 00:25:08.349 Atomic Boundary Offset: 0 00:25:08.349 NGUID/EUI64 Never Reused: No 00:25:08.349 ANA group ID: 1 00:25:08.349 Namespace Write Protected: No 00:25:08.349 Number of LBA Formats: 1 00:25:08.349 Current LBA Format: LBA Format #00 00:25:08.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:08.349 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.349 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.349 rmmod nvme_tcp 00:25:08.609 rmmod nvme_fabrics 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.609 16:18:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:10.512 16:18:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:10.512 16:18:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:13.800 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:13.800 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:13.800 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:13.801 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:14.368 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:14.368 00:25:14.368 real 0m16.637s 00:25:14.368 user 0m4.409s 00:25:14.368 sys 0m8.657s 00:25:14.368 16:18:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.368 16:18:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.368 ************************************ 00:25:14.368 END TEST nvmf_identify_kernel_target 00:25:14.368 ************************************ 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.628 ************************************ 00:25:14.628 START TEST nvmf_auth_host 00:25:14.628 ************************************ 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:14.628 * Looking for test storage... 
00:25:14.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:14.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.628 --rc genhtml_branch_coverage=1 00:25:14.628 --rc genhtml_function_coverage=1 00:25:14.628 --rc genhtml_legend=1 00:25:14.628 --rc geninfo_all_blocks=1 00:25:14.628 --rc geninfo_unexecuted_blocks=1 00:25:14.628 00:25:14.628 ' 00:25:14.628 16:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:14.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.628 --rc genhtml_branch_coverage=1 00:25:14.628 --rc genhtml_function_coverage=1 00:25:14.628 --rc genhtml_legend=1 00:25:14.628 --rc geninfo_all_blocks=1 00:25:14.628 --rc geninfo_unexecuted_blocks=1 00:25:14.628 00:25:14.628 ' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:14.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.628 --rc genhtml_branch_coverage=1 00:25:14.628 --rc genhtml_function_coverage=1 00:25:14.628 --rc genhtml_legend=1 00:25:14.628 --rc geninfo_all_blocks=1 00:25:14.628 --rc geninfo_unexecuted_blocks=1 00:25:14.628 00:25:14.628 ' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:14.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.628 --rc genhtml_branch_coverage=1 00:25:14.628 --rc genhtml_function_coverage=1 00:25:14.628 --rc genhtml_legend=1 00:25:14.628 --rc geninfo_all_blocks=1 00:25:14.628 --rc geninfo_unexecuted_blocks=1 00:25:14.628 00:25:14.628 ' 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.628 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.629 16:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.629 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.888 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.888 16:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.888 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.888 16:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.458 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:21.459 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:21.459 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:21.459 Found net devices under 0000:86:00.0: cvl_0_0 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:21.459 Found net devices under 0000:86:00.1: cvl_0_1 00:25:21.459 16:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.459 16:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:25:21.459 00:25:21.459 --- 10.0.0.2 ping statistics --- 00:25:21.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.459 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:25:21.459 00:25:21.459 --- 10.0.0.1 ping statistics --- 00:25:21.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.459 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2866801 00:25:21.459 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:21.460 16:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2866801 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2866801 ']' 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8217a4ec3d52672a68c31aaa48b14a5c 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PiW 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8217a4ec3d52672a68c31aaa48b14a5c 0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8217a4ec3d52672a68c31aaa48b14a5c 0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8217a4ec3d52672a68c31aaa48b14a5c 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PiW 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PiW 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PiW 00:25:21.460 16:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50c6fb311dae2ecb47798c156d1c345a28ba2c9cd06c6d4898cebf7131c80f51 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YBQ 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50c6fb311dae2ecb47798c156d1c345a28ba2c9cd06c6d4898cebf7131c80f51 3 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50c6fb311dae2ecb47798c156d1c345a28ba2c9cd06c6d4898cebf7131c80f51 3 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50c6fb311dae2ecb47798c156d1c345a28ba2c9cd06c6d4898cebf7131c80f51 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YBQ 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YBQ 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YBQ 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=09efb79683b32f618f1ab511aa193c06b55e8e9b222b7744 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xwb 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 09efb79683b32f618f1ab511aa193c06b55e8e9b222b7744 0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 09efb79683b32f618f1ab511aa193c06b55e8e9b222b7744 0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.460 16:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=09efb79683b32f618f1ab511aa193c06b55e8e9b222b7744 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xwb 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xwb 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Xwb 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=372b61bd851c99039e85c1ccd2bc5f23dbf19aeddb292488 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ReS 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 372b61bd851c99039e85c1ccd2bc5f23dbf19aeddb292488 2 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 372b61bd851c99039e85c1ccd2bc5f23dbf19aeddb292488 2 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=372b61bd851c99039e85c1ccd2bc5f23dbf19aeddb292488 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ReS 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ReS 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ReS 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bfbe14bdb2a8f663aa742a8898fda37e 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.460 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pC7 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bfbe14bdb2a8f663aa742a8898fda37e 1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bfbe14bdb2a8f663aa742a8898fda37e 1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bfbe14bdb2a8f663aa742a8898fda37e 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pC7 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pC7 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pC7 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=c526957cd2313afa815a2a8249660dd8 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2HB 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c526957cd2313afa815a2a8249660dd8 1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c526957cd2313afa815a2a8249660dd8 1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c526957cd2313afa815a2a8249660dd8 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:21.461 16:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2HB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2HB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2HB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:21.461 16:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13f462b69f582a463f62d192f8705a8925cec9693d453be1 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3yd 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13f462b69f582a463f62d192f8705a8925cec9693d453be1 2 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13f462b69f582a463f62d192f8705a8925cec9693d453be1 2 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13f462b69f582a463f62d192f8705a8925cec9693d453be1 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3yd 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3yd 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3yd 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad357fa9ba51c85c8fd10dc81f69a52a 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xXo 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad357fa9ba51c85c8fd10dc81f69a52a 0 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad357fa9ba51c85c8fd10dc81f69a52a 0 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad357fa9ba51c85c8fd10dc81f69a52a 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xXo 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xXo 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xXo 00:25:21.461 16:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3ada8c037e2355c8d693e3b40b38c3410ff6ba1f6f9af3a5806b7697e07a2906 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cjB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3ada8c037e2355c8d693e3b40b38c3410ff6ba1f6f9af3a5806b7697e07a2906 3 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3ada8c037e2355c8d693e3b40b38c3410ff6ba1f6f9af3a5806b7697e07a2906 3 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3ada8c037e2355c8d693e3b40b38c3410ff6ba1f6f9af3a5806b7697e07a2906 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cjB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cjB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cjB 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2866801 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2866801 ']' 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
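
The trace above repeatedly runs `gen_dhchap_key <digest> <len>`: it reads `len/2` random bytes via `xxd -p -c0 -l N /dev/urandom`, then pipes the hex through an inline `python -` step (`format_key DHHC-1 <hex> <digest>`) whose body is not shown in the xtrace. A plausible reconstruction, assuming the standard DHHC-1 secret encoding used by `nvme gen-dhchap-key` (base64 of the key bytes followed by their CRC-32 in little-endian order, tagged with the digest id from the script's `digests` map: null=0, sha256=1, sha384=2, sha512=3):

```python
import base64
import os
import zlib


def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Wrap raw key bytes in the DHHC-1 secret format:
    DHHC-1:<digest id>:<base64(key || CRC-32(key) little-endian)>:
    (assumed encoding; the trace only shows the wrapper being invoked)."""
    raw = bytes.fromhex(hex_key)
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest_id, base64.b64encode(raw + crc).decode())


def gen_dhchap_key(digest_id: int, hex_len: int) -> str:
    # Mirrors `xxd -p -c0 -l <hex_len/2> /dev/urandom`: hex_len/2 random
    # bytes yield hex_len hex characters before formatting.
    return format_dhchap_key(os.urandom(hex_len // 2).hex(), digest_id)
```

For example, the first key in the trace (`8217a4ec…`, digest 0, written to `/tmp/spdk.key-null.PiW` and `chmod 0600`) would come out as `DHHC-1:00:<base64 of 20 bytes>:`.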
00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.461 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PiW 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YBQ ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YBQ 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Xwb 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ReS ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ReS 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pC7 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2HB ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2HB 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.3yd 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xXo ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xXo 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cjB 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.721 16:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:21.721 16:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:25.007 Waiting for block devices as requested 00:25:25.007 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:25.007 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:25.007 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:25.265 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:25.265 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:25.265 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:25.523 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:25.523 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:25.523 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:25.523 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:25.781 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:26.348 16:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:26.348 No valid GPT data, bailing 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:26.348
00:25:26.348 Discovery Log Number of Records 2, Generation counter 2
00:25:26.348 =====Discovery Log Entry 0======
00:25:26.348 trtype: tcp
00:25:26.348 adrfam: ipv4
00:25:26.348 subtype: current discovery subsystem
00:25:26.348 treq: not specified, sq flow control disable supported
00:25:26.348 portid: 1
00:25:26.348 trsvcid: 4420
00:25:26.348 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:26.348 traddr: 10.0.0.1
00:25:26.348 eflags: none
00:25:26.348 sectype: none
00:25:26.348 =====Discovery Log Entry 1======
00:25:26.348 trtype: tcp
00:25:26.348 adrfam: ipv4
00:25:26.348 subtype: nvme subsystem
00:25:26.348 treq: not specified, sq flow control disable supported
00:25:26.348 portid: 1
00:25:26.348 trsvcid: 4420
00:25:26.348 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:26.348 traddr: 10.0.0.1
00:25:26.348 eflags: none
00:25:26.348 sectype: none
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.348 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.607 nvme0n1 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.607 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.608 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.867 nvme0n1 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.867 16:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.867 
16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.867 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.868 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.127 nvme0n1 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.127 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.128 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.128 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.128 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.128 16:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:27.386 nvme0n1 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.386 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.387 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.387 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.387 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.387 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.387 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 nvme0n1 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.646 16:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 nvme0n1 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.905 
16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:27.905 
16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.905 16:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.905 nvme0n1 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.905 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.164 16:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.164 16:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.164 nvme0n1 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.164 16:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.423 16:18:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:28.423 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.424 nvme0n1 00:25:28.424 16:18:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.424 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:28.682 16:18:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.682 nvme0n1 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.682 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.940 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.940 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.940 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.940 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.941 16:18:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.941 nvme0n1 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.941 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.199 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.199 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.199 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.200 16:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.458 nvme0n1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.459 
16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.459 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.718 nvme0n1 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.718 16:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.718 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.977 nvme0n1 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.977 16:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:29.977 
16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.977 16:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.977 16:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.235 nvme0n1 00:25:30.235 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.235 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.235 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.235 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.235 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.236 16:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.236 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.236 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.236 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.236 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.494 
16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.494 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.753 nvme0n1 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.753 16:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.753 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.012 nvme0n1 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.012 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.270 16:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.270 16:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.528 nvme0n1 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.528 16:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.528 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.529 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.096 nvme0n1 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.096 16:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.096 16:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.096 16:18:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.096 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.097 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.097 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.097 16:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.355 nvme0n1 00:25:32.355 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.355 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.355 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.355 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.355 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.355 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.614 16:18:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.614 16:18:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.614 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.873 nvme0n1 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.873 16:18:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.873 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.874 16:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.440 nvme0n1 00:25:33.440 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.440 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.440 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.440 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.440 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.699 16:18:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.699 16:18:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.699 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.699 16:18:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.265 nvme0n1 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.265 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.266 16:18:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.266 16:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.832 nvme0n1 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.832 16:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.429 nvme0n1 00:25:35.429 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.429 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.429 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.429 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.429 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.429 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.789 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.790 
16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.790 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.106 nvme0n1 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.106 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.392 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.393 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.393 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.393 16:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.393 nvme0n1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.393 
16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.393 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.651 nvme0n1 
00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:36.651 16:18:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.651 
16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.651 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.911 nvme0n1 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.911 16:18:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.911 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.171 nvme0n1 00:25:37.171 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.171 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.171 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.172 16:18:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.172 16:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.431 nvme0n1 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:37.431 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.432 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.690 nvme0n1 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.690 
16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.690 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.691 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.949 nvme0n1 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 
00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.949 16:18:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.949 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.950 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.950 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.950 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.950 nvme0n1 00:25:37.950 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.208 16:18:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.208 16:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.208 nvme0n1 00:25:38.208 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.208 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.208 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.208 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.208 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 nvme0n1 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.737 16:18:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.737 16:18:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.737 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.737 16:18:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.995 nvme0n1 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:38.995 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.996 
16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.996 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.259 nvme0n1 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.259 16:18:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.259 16:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.259 16:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.259 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.517 nvme0n1 00:25:39.517 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.517 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.517 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.517 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.517 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.517 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.518 16:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.518 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.776 nvme0n1 00:25:39.776 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.776 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.776 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.776 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.776 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.776 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.034 16:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.034 16:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.034 
16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.034 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.293 nvme0n1 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.293 16:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.293 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.294 16:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.553 nvme0n1 
00:25:40.553 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.553 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.553 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.553 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.553 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.553 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:40.812 16:18:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.812 
16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.812 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.070 nvme0n1 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.070 16:18:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:41.070 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.071 16:18:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.071 16:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.637 nvme0n1 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:41.637 16:18:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.637 16:18:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.637 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.896 nvme0n1 00:25:41.896 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.896 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.896 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.896 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.896 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.896 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.155 16:18:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:42.155 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:42.156 16:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.415 nvme0n1 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.415 16:18:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.415 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.981 nvme0n1 00:25:42.981 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:42.981 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.981 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.981 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.981 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.981 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.240 16:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.807 nvme0n1 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
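Every secret echoed in the trace uses the DHHC-1 representation: the literal prefix `DHHC-1:`, a two-digit hash-id field (`00` through `03` appear above), a base64 payload, and a trailing colon. A small sketch that splits those fields; `parse_dhhc` is a hypothetical helper for illustration, not part of the test suite:

```shell
#!/usr/bin/env bash
# Hypothetical helper: split a DHHC-1 secret (as printed in the trace)
# into its hash-id and base64 payload fields, without decoding them.
parse_dhhc() {
    local key=$1
    # Shape check: "DHHC-1:" + 2-char id + ":" + payload + ":"
    [[ $key == DHHC-1:??:*: ]] || { echo "not a DHHC-1 key" >&2; return 1; }
    local hash_id=${key:7:2}          # e.g. 00, 01, 02, 03 in the trace
    local payload=${key#DHHC-1:??:}   # strip the fixed prefix
    payload=${payload%:}              # strip the trailing colon
    echo "hash_id=$hash_id payload_len=${#payload}"
}

parse_dhhc "DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL:"
```

The sketch only validates the field layout visible in the log; interpreting the hash-id or decoding the payload is outside what the trace shows.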
00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:43.807 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.808 16:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.376 nvme0n1 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.376 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.943 nvme0n1 00:25:44.944 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.944 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.944 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.944 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.944 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.944 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.203 16:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.769 nvme0n1 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:45.769 16:18:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.769 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.770 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.770 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.028 nvme0n1 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.028 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.029 nvme0n1 00:25:46.029 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.288 16:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.288 nvme0n1 00:25:46.288 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.288 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.288 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.288 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.288 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.288 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.547 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.548 nvme0n1 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.548 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.806 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.807 nvme0n1 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.807 16:18:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.807 16:18:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.807 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.065 nvme0n1 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:47.065 16:18:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.065 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.066 16:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.324 nvme0n1 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.324 
16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.324 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.325 16:18:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.325 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.583 nvme0n1 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.583 16:18:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.583 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.584 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.842 nvme0n1
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.842 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=:
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=:
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.843 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.102 nvme0n1
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL:
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=:
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL:
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=:
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.102 16:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.361 nvme0n1
00:25:48.361 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.361 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.361 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.361 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.361 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.361 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==:
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==:
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==:
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==:
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.618 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.876 nvme0n1
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:48.876 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a:
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb:
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a:
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]]
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb:
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.877 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.135 nvme0n1
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==:
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD:
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==:
00:25:49.135 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]]
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD:
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.136 16:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.394 nvme0n1
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=:
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=:
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.394 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.656 nvme0n1
00:25:49.656 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.656 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:49.656 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:49.656 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.656 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL:
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=:
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL:
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]]
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=:
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:49.917 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.918 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:50.176 nvme0n1
00:25:50.176 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.176 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:50.176 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.176 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.176 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.177 16:18:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.177 16:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.177 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.744 nvme0n1 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:50.744 
16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.744 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.745 16:18:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.745 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.311 nvme0n1 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.311 16:18:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.311 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.312 16:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.570 nvme0n1 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.570 16:18:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.570 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.138 nvme0n1 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.138 
16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODIxN2E0ZWMzZDUyNjcyYTY4YzMxYWFhNDhiMTRhNWMJhrrL: 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: ]] 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBjNmZiMzExZGFlMmVjYjQ3Nzk4YzE1NmQxYzM0NWEyOGJhMmM5Y2QwNmM2ZDQ4OThjZWJmNzEzMWM4MGY1MSOH3o8=: 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.138 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.138 16:18:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.139 16:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.706 nvme0n1 00:25:52.706 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.706 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.706 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.706 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.706 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.706 16:18:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:52.707 16:18:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.707 16:18:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.707 16:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.275 nvme0n1 00:25:53.275 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.275 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.275 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.275 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.275 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.275 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.533 16:18:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:53.533 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:53.534 16:18:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.534 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.101 nvme0n1 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmNDYyYjY5ZjU4MmE0NjNmNjJkMTkyZjg3MDVhODkyNWNlYzk2OTNkNDUzYmUx9W6Bxg==: 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWQzNTdmYTliYTUxYzg1YzhmZDEwZGM4MWY2OWE1MmHA3/mD: 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.101 16:18:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.101 16:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.668 nvme0n1 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2FkYThjMDM3ZTIzNTVjOGQ2OTNlM2I0MGIzOGMzNDEwZmY2YmExZjZmOWFmM2E1ODA2Yjc2OTdlMDdhMjkwNh7hMxs=: 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:54.668 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.669 
16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.669 16:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.604 nvme0n1 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.604 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.604 request: 00:25:55.604 { 00:25:55.604 "name": "nvme0", 00:25:55.604 "trtype": "tcp", 00:25:55.604 "traddr": "10.0.0.1", 00:25:55.604 "adrfam": "ipv4", 00:25:55.604 "trsvcid": "4420", 00:25:55.604 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:55.604 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:55.604 "prchk_reftag": false, 00:25:55.604 "prchk_guard": false, 00:25:55.604 "hdgst": false, 00:25:55.604 "ddgst": false, 00:25:55.605 "allow_unrecognized_csi": false, 00:25:55.605 "method": "bdev_nvme_attach_controller", 00:25:55.605 "req_id": 1 00:25:55.605 } 00:25:55.605 Got JSON-RPC error 
response 00:25:55.605 response: 00:25:55.605 { 00:25:55.605 "code": -5, 00:25:55.605 "message": "Input/output error" 00:25:55.605 } 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.605 request: 
00:25:55.605 { 00:25:55.605 "name": "nvme0", 00:25:55.605 "trtype": "tcp", 00:25:55.605 "traddr": "10.0.0.1", 00:25:55.605 "adrfam": "ipv4", 00:25:55.605 "trsvcid": "4420", 00:25:55.605 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:55.605 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:55.605 "prchk_reftag": false, 00:25:55.605 "prchk_guard": false, 00:25:55.605 "hdgst": false, 00:25:55.605 "ddgst": false, 00:25:55.605 "dhchap_key": "key2", 00:25:55.605 "allow_unrecognized_csi": false, 00:25:55.605 "method": "bdev_nvme_attach_controller", 00:25:55.605 "req_id": 1 00:25:55.605 } 00:25:55.605 Got JSON-RPC error response 00:25:55.605 response: 00:25:55.605 { 00:25:55.605 "code": -5, 00:25:55.605 "message": "Input/output error" 00:25:55.605 } 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.605 16:18:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.605 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.864 request: 00:25:55.864 { 00:25:55.864 "name": "nvme0", 00:25:55.864 "trtype": "tcp", 00:25:55.864 "traddr": "10.0.0.1", 00:25:55.864 "adrfam": "ipv4", 00:25:55.864 "trsvcid": "4420", 00:25:55.864 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:55.864 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:55.864 "prchk_reftag": false, 00:25:55.864 "prchk_guard": false, 00:25:55.864 "hdgst": false, 00:25:55.864 "ddgst": false, 00:25:55.864 "dhchap_key": "key1", 00:25:55.864 "dhchap_ctrlr_key": "ckey2", 00:25:55.864 "allow_unrecognized_csi": false, 00:25:55.864 "method": "bdev_nvme_attach_controller", 00:25:55.864 "req_id": 1 00:25:55.864 } 00:25:55.864 Got JSON-RPC error response 00:25:55.864 response: 00:25:55.864 { 00:25:55.864 "code": -5, 00:25:55.864 "message": "Input/output error" 00:25:55.864 } 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.864 nvme0n1 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:55.864 16:18:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:55.864 
16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.864 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.122 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.122 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.123 request: 00:25:56.123 { 00:25:56.123 "name": "nvme0", 00:25:56.123 "dhchap_key": "key1", 00:25:56.123 "dhchap_ctrlr_key": "ckey2", 00:25:56.123 "method": "bdev_nvme_set_keys", 00:25:56.123 "req_id": 1 00:25:56.123 } 00:25:56.123 Got JSON-RPC error response 00:25:56.123 response: 
00:25:56.123 { 00:25:56.123 "code": -13, 00:25:56.123 "message": "Permission denied" 00:25:56.123 } 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:56.123 16:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:57.059 16:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.059 16:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:57.059 16:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.059 16:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.059 16:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.059 16:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:57.059 16:18:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllZmI3OTY4M2IzMmY2MThmMWFiNTExYWExOTNjMDZiNTVlOGU5YjIyMmI3NzQ0Pw8v+g==: 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: ]] 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzcyYjYxYmQ4NTFjOTkwMzllODVjMWNjZDJiYzVmMjNkYmYxOWFlZGRiMjkyNDg4HM+xag==: 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.436 16:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.436 nvme0n1 00:25:58.436 16:18:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZiZTE0YmRiMmE4ZjY2M2FhNzQyYTg4OThmZGEzN2ULkF5a: 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: ]] 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUyNjk1N2NkMjMxM2FmYTgxNWEyYTgyNDk2NjBkZDjnpqLb: 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:58.436 16:18:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.436 request: 00:25:58.436 { 00:25:58.436 "name": "nvme0", 00:25:58.436 "dhchap_key": "key2", 00:25:58.436 "dhchap_ctrlr_key": "ckey1", 00:25:58.436 "method": "bdev_nvme_set_keys", 00:25:58.436 "req_id": 1 00:25:58.436 } 00:25:58.436 Got JSON-RPC error response 00:25:58.436 response: 00:25:58.436 { 00:25:58.436 "code": -13, 00:25:58.436 "message": "Permission denied" 00:25:58.436 } 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:58.436 16:18:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:58.436 16:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.812 rmmod nvme_tcp 
00:25:59.812 rmmod nvme_fabrics 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2866801 ']' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2866801 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2866801 ']' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2866801 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2866801 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2866801' 00:25:59.812 killing process with pid 2866801 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2866801 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2866801 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.812 16:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.348 16:19:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:02.348 16:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:04.885 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:04.885 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:05.821 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:05.821 16:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PiW /tmp/spdk.key-null.Xwb /tmp/spdk.key-sha256.pC7 /tmp/spdk.key-sha384.3yd 
/tmp/spdk.key-sha512.cjB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:05.821 16:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:09.113 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:09.113 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:09.113 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:09.113 00:26:09.113 real 0m54.141s 00:26:09.113 user 0m48.964s 00:26:09.113 sys 0m12.638s 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.113 ************************************ 00:26:09.113 END TEST nvmf_auth_host 00:26:09.113 ************************************ 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.113 ************************************ 00:26:09.113 START TEST nvmf_digest 00:26:09.113 ************************************ 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:09.113 * Looking for test storage... 00:26:09.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.113 16:19:09 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.113 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.114 --rc genhtml_branch_coverage=1 00:26:09.114 --rc genhtml_function_coverage=1 00:26:09.114 --rc genhtml_legend=1 00:26:09.114 --rc geninfo_all_blocks=1 00:26:09.114 --rc geninfo_unexecuted_blocks=1 00:26:09.114 00:26:09.114 ' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.114 --rc genhtml_branch_coverage=1 00:26:09.114 --rc genhtml_function_coverage=1 00:26:09.114 --rc genhtml_legend=1 00:26:09.114 --rc geninfo_all_blocks=1 00:26:09.114 --rc geninfo_unexecuted_blocks=1 00:26:09.114 00:26:09.114 ' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.114 --rc genhtml_branch_coverage=1 00:26:09.114 --rc genhtml_function_coverage=1 00:26:09.114 --rc genhtml_legend=1 00:26:09.114 --rc geninfo_all_blocks=1 00:26:09.114 --rc geninfo_unexecuted_blocks=1 00:26:09.114 00:26:09.114 ' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.114 --rc genhtml_branch_coverage=1 00:26:09.114 --rc genhtml_function_coverage=1 00:26:09.114 --rc genhtml_legend=1 00:26:09.114 --rc geninfo_all_blocks=1 00:26:09.114 --rc geninfo_unexecuted_blocks=1 00:26:09.114 00:26:09.114 ' 00:26:09.114 16:19:09 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.114 
16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.114 16:19:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.114 16:19:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.686 16:19:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:15.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:15.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:15.686 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:15.687 Found net devices under 0000:86:00.0: cvl_0_0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:15.687 Found net devices under 0000:86:00.1: cvl_0_1 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:15.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:26:15.687 00:26:15.687 --- 10.0.0.2 ping statistics --- 00:26:15.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.687 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:26:15.687 00:26:15.687 --- 10.0.0.1 ping statistics --- 00:26:15.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.687 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.687 ************************************ 00:26:15.687 START TEST nvmf_digest_clean 00:26:15.687 ************************************ 00:26:15.687 
16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2880566 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2880566 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2880566 ']' 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.687 16:19:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.687 [2024-11-20 16:19:15.633623] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:15.687 [2024-11-20 16:19:15.633670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.687 [2024-11-20 16:19:15.714620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.687 [2024-11-20 16:19:15.755677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.687 [2024-11-20 16:19:15.755711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.687 [2024-11-20 16:19:15.755718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.687 [2024-11-20 16:19:15.755724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.687 [2024-11-20 16:19:15.755729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:15.687 [2024-11-20 16:19:15.756285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.687 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.688 null0 00:26:15.688 [2024-11-20 16:19:15.909433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.688 [2024-11-20 16:19:15.933629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2880586 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2880586 /var/tmp/bperf.sock 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2880586 ']' 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:15.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.688 16:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.688 [2024-11-20 16:19:15.984645] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:15.688 [2024-11-20 16:19:15.984686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880586 ] 00:26:15.688 [2024-11-20 16:19:16.058012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.688 [2024-11-20 16:19:16.098801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.688 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.253 nvme0n1 00:26:16.253 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:16.253 16:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:16.253 Running I/O for 2 seconds... 00:26:18.558 24592.00 IOPS, 96.06 MiB/s [2024-11-20T15:19:19.395Z] 24536.00 IOPS, 95.84 MiB/s 00:26:18.558 Latency(us) 00:26:18.558 [2024-11-20T15:19:19.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:18.558 nvme0n1 : 2.00 24552.28 95.91 0.00 0.00 5209.27 2379.24 12081.42 00:26:18.558 [2024-11-20T15:19:19.395Z] =================================================================================================================== 00:26:18.558 [2024-11-20T15:19:19.395Z] Total : 24552.28 95.91 0.00 0.00 5209.27 2379.24 12081.42 00:26:18.558 { 00:26:18.558 "results": [ 00:26:18.558 { 00:26:18.558 "job": "nvme0n1", 00:26:18.558 "core_mask": "0x2", 00:26:18.558 "workload": "randread", 00:26:18.558 "status": "finished", 00:26:18.558 "queue_depth": 128, 00:26:18.558 "io_size": 4096, 00:26:18.558 "runtime": 2.003887, 00:26:18.558 "iops": 24552.282638691704, 00:26:18.558 "mibps": 95.90735405738947, 00:26:18.558 "io_failed": 0, 00:26:18.558 "io_timeout": 0, 00:26:18.558 "avg_latency_us": 5209.273977801344, 00:26:18.558 "min_latency_us": 2379.241739130435, 00:26:18.558 "max_latency_us": 12081.419130434782 00:26:18.558 } 00:26:18.558 ], 00:26:18.558 "core_count": 1 00:26:18.558 } 00:26:18.558 16:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:18.558 16:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:18.558 16:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:18.558 16:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:18.558 | select(.opcode=="crc32c") 00:26:18.558 | "\(.module_name) \(.executed)"' 00:26:18.558 16:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2880586 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2880586 ']' 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2880586 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.558 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880586 00:26:18.559 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:18.559 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:18.559 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880586' 00:26:18.559 killing process with pid 2880586 00:26:18.559 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2880586 00:26:18.559 Received shutdown signal, test time was about 2.000000 seconds 00:26:18.559 00:26:18.559 Latency(us) 00:26:18.559 [2024-11-20T15:19:19.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.559 [2024-11-20T15:19:19.396Z] =================================================================================================================== 00:26:18.559 [2024-11-20T15:19:19.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.559 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2880586 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2881060 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2881060 /var/tmp/bperf.sock 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2881060 ']' 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.817 [2024-11-20 16:19:19.449510] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:18.817 [2024-11-20 16:19:19.449559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881060 ] 00:26:18.817 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.817 Zero copy mechanism will not be used. 
00:26:18.817 [2024-11-20 16:19:19.526012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.817 [2024-11-20 16:19:19.568692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.817 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.076 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.076 16:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.641 nvme0n1 00:26:19.641 16:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.641 16:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.641 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.641 Zero copy mechanism will not be used. 00:26:19.641 Running I/O for 2 seconds... 
00:26:21.509 5444.00 IOPS, 680.50 MiB/s [2024-11-20T15:19:22.346Z] 5452.00 IOPS, 681.50 MiB/s 00:26:21.509 Latency(us) 00:26:21.509 [2024-11-20T15:19:22.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.509 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:21.509 nvme0n1 : 2.00 5453.10 681.64 0.00 0.00 2931.60 673.17 9459.98 00:26:21.509 [2024-11-20T15:19:22.346Z] =================================================================================================================== 00:26:21.509 [2024-11-20T15:19:22.346Z] Total : 5453.10 681.64 0.00 0.00 2931.60 673.17 9459.98 00:26:21.509 { 00:26:21.509 "results": [ 00:26:21.509 { 00:26:21.509 "job": "nvme0n1", 00:26:21.509 "core_mask": "0x2", 00:26:21.509 "workload": "randread", 00:26:21.509 "status": "finished", 00:26:21.509 "queue_depth": 16, 00:26:21.509 "io_size": 131072, 00:26:21.509 "runtime": 2.00253, 00:26:21.509 "iops": 5453.10182618987, 00:26:21.509 "mibps": 681.6377282737337, 00:26:21.509 "io_failed": 0, 00:26:21.509 "io_timeout": 0, 00:26:21.509 "avg_latency_us": 2931.596210861602, 00:26:21.509 "min_latency_us": 673.1686956521739, 00:26:21.509 "max_latency_us": 9459.979130434782 00:26:21.509 } 00:26:21.509 ], 00:26:21.509 "core_count": 1 00:26:21.509 } 00:26:21.509 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.509 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.509 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.509 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.509 | select(.opcode=="crc32c") 00:26:21.509 | "\(.module_name) \(.executed)"' 00:26:21.509 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2881060 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2881060 ']' 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2881060 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.767 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2881060 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2881060' 00:26:22.025 killing process with pid 2881060 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2881060 00:26:22.025 Received shutdown signal, test time was about 2.000000 seconds 
00:26:22.025 00:26:22.025 Latency(us) 00:26:22.025 [2024-11-20T15:19:22.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.025 [2024-11-20T15:19:22.862Z] =================================================================================================================== 00:26:22.025 [2024-11-20T15:19:22.862Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2881060 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:22.025 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2881750 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2881750 /var/tmp/bperf.sock 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2881750 ']' 00:26:22.026 16:19:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:22.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.026 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:22.026 [2024-11-20 16:19:22.815960] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:22.026 [2024-11-20 16:19:22.816008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881750 ] 00:26:22.284 [2024-11-20 16:19:22.891959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.284 [2024-11-20 16:19:22.934771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.284 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.284 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:22.284 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:22.284 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:22.284 16:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.542 16:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.542 16:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.799 nvme0n1 00:26:23.057 16:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:23.057 16:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.057 Running I/O for 2 seconds... 
00:26:24.923 27575.00 IOPS, 107.71 MiB/s [2024-11-20T15:19:25.760Z] 27636.50 IOPS, 107.96 MiB/s 00:26:24.923 Latency(us) 00:26:24.923 [2024-11-20T15:19:25.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:24.923 nvme0n1 : 2.01 27659.83 108.05 0.00 0.00 4621.98 2279.51 9118.05 00:26:24.923 [2024-11-20T15:19:25.760Z] =================================================================================================================== 00:26:24.923 [2024-11-20T15:19:25.760Z] Total : 27659.83 108.05 0.00 0.00 4621.98 2279.51 9118.05 00:26:24.923 { 00:26:24.923 "results": [ 00:26:24.923 { 00:26:24.923 "job": "nvme0n1", 00:26:24.923 "core_mask": "0x2", 00:26:24.923 "workload": "randwrite", 00:26:24.923 "status": "finished", 00:26:24.923 "queue_depth": 128, 00:26:24.923 "io_size": 4096, 00:26:24.923 "runtime": 2.005833, 00:26:24.923 "iops": 27659.830105497316, 00:26:24.923 "mibps": 108.04621134959889, 00:26:24.923 "io_failed": 0, 00:26:24.923 "io_timeout": 0, 00:26:24.923 "avg_latency_us": 4621.98106670282, 00:26:24.923 "min_latency_us": 2279.513043478261, 00:26:24.923 "max_latency_us": 9118.052173913044 00:26:24.923 } 00:26:24.923 ], 00:26:24.923 "core_count": 1 00:26:24.923 } 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:25.181 | select(.opcode=="crc32c") 00:26:25.181 | "\(.module_name) \(.executed)"' 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2881750 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2881750 ']' 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2881750 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.181 16:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2881750 00:26:25.181 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:25.181 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:25.181 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2881750' 00:26:25.181 killing process with pid 2881750 00:26:25.181 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2881750 00:26:25.181 Received shutdown signal, test time was about 2.000000 seconds 
00:26:25.181 00:26:25.181 Latency(us) 00:26:25.181 [2024-11-20T15:19:26.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.181 [2024-11-20T15:19:26.018Z] =================================================================================================================== 00:26:25.181 [2024-11-20T15:19:26.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.181 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2881750 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2882226 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2882226 /var/tmp/bperf.sock 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2882226 ']' 00:26:25.439 16:19:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.439 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.439 [2024-11-20 16:19:26.219853] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:25.440 [2024-11-20 16:19:26.219906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882226 ] 00:26:25.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.440 Zero copy mechanism will not be used. 
00:26:25.698 [2024-11-20 16:19:26.293301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.698 [2024-11-20 16:19:26.335942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.698 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.698 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:25.698 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:25.698 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:25.698 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:25.956 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.956 16:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.214 nvme0n1 00:26:26.472 16:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:26.472 16:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.472 Zero copy mechanism will not be used. 00:26:26.472 Running I/O for 2 seconds... 
00:26:28.339 6336.00 IOPS, 792.00 MiB/s [2024-11-20T15:19:29.176Z] 6520.50 IOPS, 815.06 MiB/s 00:26:28.339 Latency(us) 00:26:28.339 [2024-11-20T15:19:29.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.339 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:28.339 nvme0n1 : 2.00 6517.57 814.70 0.00 0.00 2450.70 1759.50 4957.94 00:26:28.339 [2024-11-20T15:19:29.176Z] =================================================================================================================== 00:26:28.339 [2024-11-20T15:19:29.176Z] Total : 6517.57 814.70 0.00 0.00 2450.70 1759.50 4957.94 00:26:28.339 { 00:26:28.339 "results": [ 00:26:28.339 { 00:26:28.339 "job": "nvme0n1", 00:26:28.339 "core_mask": "0x2", 00:26:28.339 "workload": "randwrite", 00:26:28.339 "status": "finished", 00:26:28.339 "queue_depth": 16, 00:26:28.339 "io_size": 131072, 00:26:28.339 "runtime": 2.003355, 00:26:28.339 "iops": 6517.566781723659, 00:26:28.339 "mibps": 814.6958477154574, 00:26:28.339 "io_failed": 0, 00:26:28.339 "io_timeout": 0, 00:26:28.339 "avg_latency_us": 2450.7004773051935, 00:26:28.339 "min_latency_us": 1759.4991304347825, 00:26:28.339 "max_latency_us": 4957.940869565217 00:26:28.339 } 00:26:28.339 ], 00:26:28.339 "core_count": 1 00:26:28.339 } 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:28.596 | select(.opcode=="crc32c") 00:26:28.596 | "\(.module_name) \(.executed)"' 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2882226 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2882226 ']' 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2882226 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.596 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882226 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882226' 00:26:28.854 killing process with pid 2882226 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2882226 00:26:28.854 Received shutdown signal, test time was about 2.000000 seconds 
00:26:28.854 00:26:28.854 Latency(us) 00:26:28.854 [2024-11-20T15:19:29.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.854 [2024-11-20T15:19:29.691Z] =================================================================================================================== 00:26:28.854 [2024-11-20T15:19:29.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2882226 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2880566 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2880566 ']' 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2880566 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880566 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880566' 00:26:28.854 killing process with pid 2880566 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2880566 00:26:28.854 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2880566 00:26:29.113 00:26:29.113 
real 0m14.246s 00:26:29.113 user 0m27.304s 00:26:29.113 sys 0m4.583s 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.113 ************************************ 00:26:29.113 END TEST nvmf_digest_clean 00:26:29.113 ************************************ 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:29.113 ************************************ 00:26:29.113 START TEST nvmf_digest_error 00:26:29.113 ************************************ 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2882942 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2882942 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2882942 ']' 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.113 16:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.372 [2024-11-20 16:19:29.953061] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:29.372 [2024-11-20 16:19:29.953102] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.372 [2024-11-20 16:19:30.031926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.372 [2024-11-20 16:19:30.079991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.372 [2024-11-20 16:19:30.080029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:29.372 [2024-11-20 16:19:30.080037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.372 [2024-11-20 16:19:30.080043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.372 [2024-11-20 16:19:30.080049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.372 [2024-11-20 16:19:30.080587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.372 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.372 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:29.372 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.372 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.372 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.373 [2024-11-20 16:19:30.149025] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.373 16:19:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.373 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.631 null0 00:26:29.631 [2024-11-20 16:19:30.241830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.631 [2024-11-20 16:19:30.266041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2882962 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2882962 /var/tmp/bperf.sock 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2882962 ']' 
00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.631 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.631 [2024-11-20 16:19:30.317442] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:29.631 [2024-11-20 16:19:30.317484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882962 ] 00:26:29.631 [2024-11-20 16:19:30.392151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.631 [2024-11-20 16:19:30.435001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.889 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.889 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:29.889 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.889 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.889 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:29.889 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.146 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.146 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.146 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.146 16:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.404 nvme0n1 00:26:30.404 16:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:30.404 16:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.404 16:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.404 16:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.404 16:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:30.404 16:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.662 Running I/O for 2 seconds... 00:26:30.662 [2024-11-20 16:19:31.278670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.662 [2024-11-20 16:19:31.278703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.662 [2024-11-20 16:19:31.278714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.662 [2024-11-20 16:19:31.289591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.662 [2024-11-20 16:19:31.289614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.662 [2024-11-20 16:19:31.289623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.662 [2024-11-20 16:19:31.300110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.662 [2024-11-20 16:19:31.300132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.662 [2024-11-20 16:19:31.300142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.662 [2024-11-20 16:19:31.309150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.662 [2024-11-20 16:19:31.309172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5833 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.662 [2024-11-20 16:19:31.309181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.662 [2024-11-20 16:19:31.319187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.662 [2024-11-20 16:19:31.319207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.662 [2024-11-20 16:19:31.319216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.662 [2024-11-20 16:19:31.330159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.330180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.330189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.339332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.339352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.339360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.352000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.352021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.352033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.363521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.363542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.363551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.371913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.371932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.371940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.383425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.383452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.396073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.396093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.396101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.404410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.404430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.404440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.416223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.416242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.416251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.428258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.428279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.428287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.439378] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.439398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.439407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.448571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.448595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.448604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.459786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.459807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.459815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.469726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.469746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.469754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.480783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.480803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.480811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.663 [2024-11-20 16:19:31.489959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.663 [2024-11-20 16:19:31.489980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.663 [2024-11-20 16:19:31.489988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.921 [2024-11-20 16:19:31.499988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.500012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.500022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.510427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.510450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.510459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.519853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.519873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.519881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.529431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.529451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.529459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.539310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.539331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.539339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.549106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.549126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 
16:19:31.549134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.558880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.558900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.558908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.568533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.568553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.568560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.578178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.578198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.578206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.587842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.587862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3582 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.587870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.597450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.597470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.597477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.607167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.607186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.607193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.617555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.617579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.617587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.628527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.628548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.628556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.638670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.638690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.638698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.647421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.647440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.647448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.660598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.660619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.660628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.672217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.672237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.672245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.683672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.683692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.683699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.695091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.695111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.695119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.703931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.703956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.703964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.716492] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.716512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.716520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.729435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.729456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.729464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.742538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.742558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.742566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.922 [2024-11-20 16:19:31.753063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:30.922 [2024-11-20 16:19:31.753085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.922 [2024-11-20 16:19:31.753094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.762761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.762784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.762793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.775322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.775343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.784847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.784868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.784876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.795294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.795315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.795323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.803883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.803903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.815715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.815736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.815744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.827153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.827174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.827183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.837446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.837466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 
16:19:31.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.846123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.846143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.846151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.857648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.857668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.857676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.866242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.866263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.866271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.181 [2024-11-20 16:19:31.877688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.181 [2024-11-20 16:19:31.877708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18428 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.181 [2024-11-20 16:19:31.877716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.889807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.889828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.889835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.899090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.899117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.899126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.909373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.909394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.909402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.920613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.920633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.920641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.931220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.931240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.931248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.940611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.940631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.940639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.950473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.950493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.950501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.961409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.961429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.961438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.974784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.974813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.974821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.985652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.985672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.985680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:31.996926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:31.996946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:31.996960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.182 [2024-11-20 16:19:32.008035] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.182 [2024-11-20 16:19:32.008055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.182 [2024-11-20 16:19:32.008063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.016954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.016978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.016988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.029363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.029386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.029395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.041230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.041251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.041259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.054522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.054543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.054551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.066193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.066224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.066232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.074467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.074487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.074495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.084883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.084902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.084914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.094933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.094958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.094967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.104332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.104352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.104359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.115052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.115072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.115080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.124886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.124906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 
16:19:32.124915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.134059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.134079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.134087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.146563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.146591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.159523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.441 [2024-11-20 16:19:32.159543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.441 [2024-11-20 16:19:32.159551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.441 [2024-11-20 16:19:32.170513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.170534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6656 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.170541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.184092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.184112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.184120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.192487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.192507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.192515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.204314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.204334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.204342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.217470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.217490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.217498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.229338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.229358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.229366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.239120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.239139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.239148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.248481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.248500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.248508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 [2024-11-20 16:19:32.259325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.259345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.259354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.442 23796.00 IOPS, 92.95 MiB/s [2024-11-20T15:19:32.279Z] [2024-11-20 16:19:32.270513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.442 [2024-11-20 16:19:32.270533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.442 [2024-11-20 16:19:32.270545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.282161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.282184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.282194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.292349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.292370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.292379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 
[2024-11-20 16:19:32.301540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.301561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.301570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.311054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.311075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.311084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.323004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.323025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.323033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.331910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.331931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.331940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.343537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.343558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.343566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.354562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.354582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.354590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.364090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.364114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.364122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.374735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.374757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.374765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.387806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.387829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.387838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.397354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.397375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.397382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.406926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.406946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.406962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.419156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.419175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1715 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.419183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.426993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.427021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.438271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.438291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.438299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.449686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.449706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.449714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.458322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.458342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:16608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.458350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.470026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.470063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.470071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.479412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.479432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.479440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.489320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.489346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.489354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.501245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.501266] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.501274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.510029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.510049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.701 [2024-11-20 16:19:32.510057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.701 [2024-11-20 16:19:32.520012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.701 [2024-11-20 16:19:32.520034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.702 [2024-11-20 16:19:32.520042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.702 [2024-11-20 16:19:32.528516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.702 [2024-11-20 16:19:32.528537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.702 [2024-11-20 16:19:32.528545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.541381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.541409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.541419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.553027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.553050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.565903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.565925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.565933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.578220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.578240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.578249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.588560] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.588581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.588588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.597891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.597911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.597919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.608070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.608090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.608098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.619304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.619325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.619333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.630456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.630476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.630484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.639035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.639055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.639063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.651212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.651232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.651240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.660873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.660893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.660901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.671305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.671326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.671334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.684113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.684135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.684143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.695361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.695382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.695390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.707827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.707847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 
16:19:32.707855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.720524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.720543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.720552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.733596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.733617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.733629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.744977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.744997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.745005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.753571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.753591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10974 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.753599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.764548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.764567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.764575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.773028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.960 [2024-11-20 16:19:32.773048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.960 [2024-11-20 16:19:32.773055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.960 [2024-11-20 16:19:32.783206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:31.961 [2024-11-20 16:19:32.783225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.961 [2024-11-20 16:19:32.783233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.796865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.796889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.796898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.809510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.809532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.809541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.819659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.819679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.819687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.828992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.829016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.829024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.840776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.840797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.840805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.849077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.849097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.849105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.859352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.859372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.859379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.869647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.869666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.869674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.878130] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.878149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.878157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.888055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.888075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.888083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.897762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.897781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.897788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.909701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.909722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.909730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.920935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.920963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.920971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.933967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.933989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.933997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.943276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.943296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.943304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.955886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.955905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.955913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.968413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.968442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.981298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.219 [2024-11-20 16:19:32.981319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.219 [2024-11-20 16:19:32.981326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.219 [2024-11-20 16:19:32.989882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.220 [2024-11-20 16:19:32.989902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.220 [2024-11-20 16:19:32.989910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.220 [2024-11-20 16:19:33.000335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.220 [2024-11-20 16:19:33.000356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.220 [2024-11-20 
16:19:33.000364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.220 [2024-11-20 16:19:33.010766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.220 [2024-11-20 16:19:33.010787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.220 [2024-11-20 16:19:33.010798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.220 [2024-11-20 16:19:33.019100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.220 [2024-11-20 16:19:33.019119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.220 [2024-11-20 16:19:33.019127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.220 [2024-11-20 16:19:33.031047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.220 [2024-11-20 16:19:33.031067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.220 [2024-11-20 16:19:33.031075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.220 [2024-11-20 16:19:33.043569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.220 [2024-11-20 16:19:33.043590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.220 [2024-11-20 16:19:33.043598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.056605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.056629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.056638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.068295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.068317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.068326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.081534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.081555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.081563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.093677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.093698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.093706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.105148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.105167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.105175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.113978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.113998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.114006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.125909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.125930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.125938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.137070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.137091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.137099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.478 [2024-11-20 16:19:33.147971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.478 [2024-11-20 16:19:33.147992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.478 [2024-11-20 16:19:33.148000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.158804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.158825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.158833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.167410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.167430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.167438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.178207] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.178226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.178234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.188916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.188937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.188945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.201103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.201123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.201135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.209092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.209112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.209120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.219681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.219701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.219709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.231263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.231283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.231291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.243073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.243093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.243101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.256911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.256932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.256940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 [2024-11-20 16:19:33.266559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518880) 00:26:32.479 [2024-11-20 16:19:33.266579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.479 [2024-11-20 16:19:33.266587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.479 23705.50 IOPS, 92.60 MiB/s 00:26:32.479 Latency(us) 00:26:32.479 [2024-11-20T15:19:33.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:32.479 nvme0n1 : 2.01 23708.59 92.61 0.00 0.00 5392.22 2678.43 18236.10 00:26:32.479 [2024-11-20T15:19:33.316Z] =================================================================================================================== 00:26:32.479 [2024-11-20T15:19:33.316Z] Total : 23708.59 92.61 0.00 0.00 5392.22 2678.43 18236.10 00:26:32.479 { 00:26:32.479 "results": [ 00:26:32.479 { 00:26:32.479 "job": "nvme0n1", 00:26:32.479 "core_mask": "0x2", 00:26:32.479 "workload": "randread", 00:26:32.479 "status": "finished", 00:26:32.479 "queue_depth": 128, 00:26:32.479 "io_size": 4096, 00:26:32.479 "runtime": 2.005982, 00:26:32.479 "iops": 23708.587614445194, 00:26:32.479 "mibps": 92.61167036892654, 00:26:32.479 "io_failed": 0, 00:26:32.479 "io_timeout": 0, 00:26:32.479 "avg_latency_us": 5392.224705057425, 00:26:32.479 "min_latency_us": 2678.4278260869564, 00:26:32.479 "max_latency_us": 18236.104347826087 00:26:32.479 } 00:26:32.479 ], 00:26:32.479 "core_count": 1 00:26:32.479 } 00:26:32.479 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:32.479 
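The `get_transient_errcount` helper invoked above fetches `bdev_get_iostat` over the bperf RPC socket and extracts `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` with jq. A minimal Python sketch of the same extraction; the payload below is a hypothetical illustration shaped after that jq path (not a captured RPC response), with the count set to the 186 the test observed:

```python
import json

# Hypothetical bdev_get_iostat payload; the field layout mirrors the jq
# path used by digest.sh, and is an assumption, not a captured response.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 186
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stats):
    """Python equivalent of the jq filter:
    .bdevs[0].driver_specific.nvme_error.status_code
        .command_transient_transport_error"""
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

count = get_transient_errcount(iostat)
# digest.sh passes only when the injected digest errors were actually
# counted, i.e. (( count > 0 )).
assert count > 0
```

The shell test makes the same check with `(( 186 > 0 ))` before killing the bdevperf process.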
16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:32.479 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:32.479 | .driver_specific 00:26:32.479 | .nvme_error 00:26:32.479 | .status_code 00:26:32.479 | .command_transient_transport_error' 00:26:32.479 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 186 > 0 )) 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2882962 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2882962 ']' 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2882962 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882962 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882962' 00:26:32.737 killing process with pid 2882962 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 2882962 00:26:32.737 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.737 00:26:32.737 Latency(us) 00:26:32.737 [2024-11-20T15:19:33.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.737 [2024-11-20T15:19:33.574Z] =================================================================================================================== 00:26:32.737 [2024-11-20T15:19:33.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.737 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2882962 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2883443 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2883443 /var/tmp/bperf.sock 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2883443 ']' 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.995 16:19:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.995 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.995 [2024-11-20 16:19:33.758176] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:32.995 [2024-11-20 16:19:33.758236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883443 ] 00:26:32.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:32.995 Zero copy mechanism will not be used. 
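`waitforlisten` above blocks until the freshly launched bdevperf process is accepting RPCs on `/var/tmp/bperf.sock`. A minimal sketch of that wait loop in Python, assuming a simple poll-until-connect strategy (the function name and timeouts are illustrative, not the autotest implementation):

```python
import os
import socket
import time

def waitforlisten(sock_path, timeout=10.0, interval=0.1):
    """Poll until a UNIX-domain socket at sock_path accepts connections.

    A sketch of the 'Waiting for process to start up and listen on UNIX
    domain socket ...' step; returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True          # server is up and listening
            except OSError:
                pass                 # socket file exists but not ready yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

Once the socket answers, the test drives the target through `rpc.py -s /var/tmp/bperf.sock ...` exactly as the trace shows.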
00:26:33.253 [2024-11-20 16:19:33.834059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.253 [2024-11-20 16:19:33.876714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.253 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.253 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:33.253 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:33.253 16:19:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:33.511 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:33.511 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.511 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.511 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.511 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.511 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.769 nvme0n1 00:26:33.769 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:33.769 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.769 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.769 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.769 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:33.769 16:19:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:33.769 Zero copy mechanism will not be used. 00:26:33.769 Running I/O for 2 seconds... 00:26:33.769 [2024-11-20 16:19:34.522466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.522500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.522510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.769 [2024-11-20 16:19:34.527759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.527788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.527797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.769 
[2024-11-20 16:19:34.533036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.533059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.533067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.769 [2024-11-20 16:19:34.538343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.538364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.538373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.769 [2024-11-20 16:19:34.543734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.543756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.543765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.769 [2024-11-20 16:19:34.549088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.549111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.549119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.769 [2024-11-20 16:19:34.554395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.769 [2024-11-20 16:19:34.554416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.769 [2024-11-20 16:19:34.554425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.560132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.560155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.560163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.565650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.565672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.565681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.571223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.571245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.571253] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.576562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.576585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.576594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.582130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.582152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.582161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.587610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.587631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.587639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.593087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.593108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 
16:19:34.593117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.770 [2024-11-20 16:19:34.598495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:33.770 [2024-11-20 16:19:34.598517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.770 [2024-11-20 16:19:34.598525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.028 [2024-11-20 16:19:34.604103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.028 [2024-11-20 16:19:34.604128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.028 [2024-11-20 16:19:34.604138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.028 [2024-11-20 16:19:34.609554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.028 [2024-11-20 16:19:34.609578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.028 [2024-11-20 16:19:34.609587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.028 [2024-11-20 16:19:34.615084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.028 [2024-11-20 16:19:34.615106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.028 [2024-11-20 16:19:34.615114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.028 [2024-11-20 16:19:34.620621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.620644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.620656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.626131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.626152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.626160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.631541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.631562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.631570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.636931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.636957] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.636966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.642310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.642331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.642340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.647856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.647878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.647885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.653379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.653402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.653410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.659051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 
16:19:34.659074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.659082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.664699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.664721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.664729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.670199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.670221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.670230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.675929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.675958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.675967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.681433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.681455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.681464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.686779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.686800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.692064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.692085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.692093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.697410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.029 [2024-11-20 16:19:34.697432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.029 [2024-11-20 16:19:34.697441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.029 [2024-11-20 16:19:34.702743] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.702764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.702773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.708087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.708108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.708116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.713424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.713446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.713460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.718746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.718767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.718775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.724053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.724074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.724081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.729613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.729634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.729643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.734983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.735005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.735013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.740370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.740391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.740399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.745766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.745788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.745796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.751234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.751256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.751265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.756478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.029 [2024-11-20 16:19:34.756499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.029 [2024-11-20 16:19:34.756507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.029 [2024-11-20 16:19:34.761651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.761676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.761684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.766937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.766965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.766973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.772200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.772220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.772228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.775088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.775109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.775117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.780618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.780640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.780649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.786163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.786185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.786194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.791636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.791658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.791666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.797111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.797132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.797140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.802526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.802546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.802554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.808018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.808039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.808047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.813502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.813524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.813532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.818968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.818989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.818997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.824439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.824460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.824468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.829875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.829897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.829904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.835755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.835777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.835785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.841376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.841398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.841406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.846838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.846859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.846867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.852302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.852323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.852334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.030 [2024-11-20 16:19:34.857766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.030 [2024-11-20 16:19:34.857787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.030 [2024-11-20 16:19:34.857795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.863265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.863294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.863304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.868762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.868786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.868795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.874179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.874200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.874209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.879607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.879628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.879636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.885048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.885069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.885077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.890586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.890607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.890615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.896115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.896136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.896145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.901608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.901634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.901642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.907071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.907091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.907100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.912593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.912614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.912622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.917930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.917960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.917969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.923295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.923316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.923325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.928782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.928809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.928817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.934237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.934258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.934265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.289 [2024-11-20 16:19:34.939746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.289 [2024-11-20 16:19:34.939767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.289 [2024-11-20 16:19:34.939775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.945112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.945133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.945142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.950673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.950695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.950702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.956159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.956180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.956188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.961573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.961594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.961603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.967013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.967034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.967042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.972601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.972623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.972631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.978133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.978156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.978166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.983656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.983678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.983685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.989036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.989057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.989065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:34.994417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:34.994438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:34.994451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.000122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.000143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.000152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.005672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.005693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.005702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.011055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.011076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.011084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.016573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.016594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.016602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.021924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.021945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.021959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.027491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.027511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.027520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.033003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.033023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.033031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.038584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.038605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.038615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.044069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.044093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.044101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.049541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.049562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.049570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.054851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.054872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.054880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.060287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.060308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.060317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.065763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.065785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.065793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.071336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.071357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.071365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.076751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.076772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.076780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.082685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.082705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.082713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.290 [2024-11-20 16:19:35.087996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.290 [2024-11-20 16:19:35.088017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.290 [2024-11-20 16:19:35.088025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.291 [2024-11-20 16:19:35.093370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.291 [2024-11-20 16:19:35.093391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.291 [2024-11-20 16:19:35.093399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.291 [2024-11-20 16:19:35.098872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.291 [2024-11-20 16:19:35.098893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.291 [2024-11-20 16:19:35.098901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.291 [2024-11-20 16:19:35.104350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.291 [2024-11-20 16:19:35.104372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.291 [2024-11-20 16:19:35.104379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.291 [2024-11-20 16:19:35.109860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.291 [2024-11-20 16:19:35.109881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.291 [2024-11-20 16:19:35.109888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.291 [2024-11-20 16:19:35.115524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.291 [2024-11-20 16:19:35.115545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.291 [2024-11-20 16:19:35.115553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.291 [2024-11-20 16:19:35.121050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:34.291 [2024-11-20 16:19:35.121073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.291 [2024-11-20 16:19:35.121082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.126494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.126518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.126527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.131899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.131923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.131932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.137862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.137884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.137895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.144821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.144842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.144850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.152295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.152317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.152326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.159345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.159366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.159375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.166788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.166810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.166819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.174800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.174822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.174831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.182818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.182841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.182850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.190679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.190702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.190711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.198507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.198531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.198541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.206753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 
00:26:34.551 [2024-11-20 16:19:35.206780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.206789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.214493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.214516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.214525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.222025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.222048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.222057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.229309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.229331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.229339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.236711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.236734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.236743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.243711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.243734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.243743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.250969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.250993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.251001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.257150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.257172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.257180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.262793] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.262815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.262823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.268551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.551 [2024-11-20 16:19:35.268573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.551 [2024-11-20 16:19:35.268581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.551 [2024-11-20 16:19:35.274144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.274166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.274174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.279308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.279330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.279338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.284758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.284780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.284790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.290458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.290481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.290489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.296094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.296117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.296125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.301674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.301696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.301705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.307518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.307540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.307549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.313141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.313164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.313176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.318614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.318636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.318644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.323944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.323971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.323979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.329268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.329290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.334599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.334620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.334629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.340285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.340307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.340315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.346158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.346179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.552 [2024-11-20 16:19:35.346187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.351803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.351825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.351833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.357488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.357509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.357517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.363021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.363046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.363054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.368726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.368747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.368755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.374291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.374312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.374320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.552 [2024-11-20 16:19:35.379854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.552 [2024-11-20 16:19:35.379876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.552 [2024-11-20 16:19:35.379884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.385550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.385575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.385585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.391054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.391078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.391087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.396644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.396667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.396676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.402170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.402192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.402201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.407871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.407894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.407903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.413545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 
00:26:34.811 [2024-11-20 16:19:35.413567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.413575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.419027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.419049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.419057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.424597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.424618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.424626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.430138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.430161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.430169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.435866] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.435887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.435895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.441740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.441762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.447222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.447244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.447253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.452651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.452672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.452681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.458098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.458120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.458135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.463671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.463692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.463701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.468857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.468879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.468888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.474496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.474518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.474527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.480251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.480273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.480281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.485819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.485841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.485849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.492527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.492549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.492558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.499283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.499305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.499313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.504915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.504937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.504945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.510488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.510510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.510519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.811 [2024-11-20 16:19:35.516166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.516188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-11-20 16:19:35.516196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.811 5451.00 IOPS, 681.38 MiB/s [2024-11-20T15:19:35.648Z] [2024-11-20 16:19:35.523146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.811 [2024-11-20 16:19:35.523167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.523175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.529135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.529156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.529164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.534839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.534861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.534869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.540518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.540540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.540548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.546248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.546270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.546279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.551862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.551884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.551893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.557555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.557577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.557589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.563122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.563143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.563152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.568775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 
00:26:34.812 [2024-11-20 16:19:35.568796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.568805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.574390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.574411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.574420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.579871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.579893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.579903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.585546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.585569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.585577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.590846] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.590868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.590876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.596543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.596565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.596573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.602095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.602117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.602126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.607727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.607752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.607760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.613074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.613095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.613103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.618434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.618455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.618464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.623783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.623804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.623813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.629471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.629492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.629500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.635309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.635331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.635340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.812 [2024-11-20 16:19:35.640624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:34.812 [2024-11-20 16:19:35.640645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-11-20 16:19:35.640653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.646056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.646081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.071 [2024-11-20 16:19:35.646091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.651557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.651580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.071 [2024-11-20 16:19:35.651589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.656885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.656907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.071 [2024-11-20 16:19:35.656915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.662243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.662264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.071 [2024-11-20 16:19:35.662273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.667684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.667716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.071 [2024-11-20 16:19:35.667724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.673078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.673099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:35.071 [2024-11-20 16:19:35.673107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.071 [2024-11-20 16:19:35.678461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.071 [2024-11-20 16:19:35.678481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.678490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.683834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.683855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.683863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.689133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.689154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.689162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.694412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.694433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.694441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.699692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.699713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.699725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.705032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.705054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.705062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.710366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.710387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.710395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.715632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.715653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.715660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.720982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.721003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.721011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.726267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.726288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.726296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.731613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.731635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.731643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.736936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 
00:26:35.072 [2024-11-20 16:19:35.736966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.736975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.742276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.742296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.742305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.747672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.747697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.747705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.753344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.753366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.753374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.758714] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.758735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.758743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.764065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.764086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.764094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.769437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.769458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.769466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.774795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.774816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.774824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.780171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.780192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.785485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.785507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.785515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.790841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.790862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.790870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.796281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.796304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.796312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.801653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.801674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.801682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.807025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.807046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.807054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.812384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.812405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.812413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.817802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.072 [2024-11-20 16:19:35.817824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.072 [2024-11-20 16:19:35.817832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.072 [2024-11-20 16:19:35.823299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.823320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.823328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.828756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.828777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.828786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.834095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.834116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.834124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.839494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.839515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:35.073 [2024-11-20 16:19:35.839526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.844883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.844904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.844912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.850173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.850194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.850202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.855464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.855485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.855493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.860848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.860869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.860877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.866168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.866189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.866196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.871455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.871476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.871484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.876830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.876849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.876856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.882132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.882153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.882161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.887497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.887518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.887526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.892815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.892836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.892844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.898060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.073 [2024-11-20 16:19:35.898081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.898089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.073 [2024-11-20 16:19:35.903452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 
00:26:35.073 [2024-11-20 16:19:35.903478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.073 [2024-11-20 16:19:35.903488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.908820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.908844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.908852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.914271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.914295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.914304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.919602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.919624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.919632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.924979] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.925001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.925009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.930251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.930273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.930284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.935526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.935547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.935555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.940849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.940871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.940878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.946193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.946215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.946223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.951515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.951537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.951545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.956805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.956827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.956835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.962072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.962093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.962101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.967347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.967368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.967376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.972616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.972638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.972646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.978025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.978049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.978057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.983410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.983431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.983439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.988719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.988741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.988750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.994094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.994116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.994124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:35.999407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:35.999429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:35.999436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.004644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.004666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:35.333 [2024-11-20 16:19:36.004674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.009917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.009938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.009946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.015202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.015224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.020520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.020541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.020549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.025817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.025838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.025846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.031156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.031178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.031186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.036517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.036539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.036547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.333 [2024-11-20 16:19:36.041933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.333 [2024-11-20 16:19:36.041962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.333 [2024-11-20 16:19:36.041971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.047333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.047354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.047362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.052685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.052706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.052714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.058080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.058102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.058109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.063467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.063489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.063497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.068813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 
00:26:35.334 [2024-11-20 16:19:36.068834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.068846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.074120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.074141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.074149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.079481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.079502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.079509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.084787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.084808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.084816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.090064] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.090086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.090093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.095364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.095385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.095393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.100713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.100734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.100742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.106026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.106048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.106056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.111315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.111337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.111345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.116664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.116689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.116697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.121953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.121974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.121982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.127223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.127244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.127252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.132577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.132598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.132606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.137882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.137904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.137912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.143174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.143195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.143203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.334 [2024-11-20 16:19:36.148516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0) 00:26:35.334 [2024-11-20 16:19:36.148538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.334 [2024-11-20 16:19:36.148546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:35.334 [2024-11-20 16:19:36.153803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:35.334 [2024-11-20 16:19:36.153825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.334 [2024-11-20 16:19:36.153833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same pattern (an nvme_tcp.c:1365 "data digest error on tqpair=(0xc925f0)" record, followed by the nvme_qpair.c:243 READ command print and its nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every in-flight READ on qid:1 from 16:19:36.159 through 16:19:36.519; only the lba, cid, and sqhd values differ. Several dozen repetitions elided. ...]
00:26:35.854 5619.50 IOPS, 702.44 MiB/s [2024-11-20T15:19:36.691Z]
00:26:35.854 [2024-11-20 16:19:36.525836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc925f0)
00:26:35.854 [2024-11-20 16:19:36.525857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.854 [2024-11-20 16:19:36.525866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:35.854
00:26:35.854                                       Latency(us)
[2024-11-20T15:19:36.691Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:35.854 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:35.854 	 nvme0n1           :       2.00    5620.17     702.52       0.00     0.00    2843.33     990.16    8149.26
[2024-11-20T15:19:36.691Z] ===================================================================================================================
[2024-11-20T15:19:36.691Z] Total             :               5620.17     702.52       0.00       0.00    2843.33     990.16    8149.26
00:26:35.854 {
00:26:35.854   "results": [
00:26:35.854     {
00:26:35.854       "job": "nvme0n1",
00:26:35.854       "core_mask": "0x2",
00:26:35.854       "workload": "randread",
00:26:35.854       "status": "finished",
00:26:35.854       "queue_depth": 16,
00:26:35.854       "io_size": 131072,
00:26:35.854       "runtime": 2.002608,
00:26:35.854       "iops": 5620.171296629195,
00:26:35.854       "mibps": 702.5214120786494,
00:26:35.854       "io_failed": 0,
00:26:35.854       "io_timeout": 0,
00:26:35.854       "avg_latency_us": 2843.3339743881947,
00:26:35.854       "min_latency_us": 990.1634782608695,
00:26:35.854       "max_latency_us": 8149.2591304347825
00:26:35.854     }
00:26:35.854   ],
"core_count": 1 00:26:35.854 } 00:26:35.854 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:35.854 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:35.854 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:35.854 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:35.854 | .driver_specific 00:26:35.854 | .nvme_error 00:26:35.854 | .status_code 00:26:35.854 | .command_transient_transport_error' 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 364 > 0 )) 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2883443 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2883443 ']' 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2883443 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883443 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2883443' 00:26:36.112 killing process with pid 2883443 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2883443 00:26:36.112 Received shutdown signal, test time was about 2.000000 seconds 00:26:36.112 00:26:36.112 Latency(us) 00:26:36.112 [2024-11-20T15:19:36.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.112 [2024-11-20T15:19:36.949Z] =================================================================================================================== 00:26:36.112 [2024-11-20T15:19:36.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:36.112 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2883443 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2884075 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2884075 /var/tmp/bperf.sock 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2884075 ']' 00:26:36.370 16:19:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:36.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.370 16:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.370 [2024-11-20 16:19:37.008166] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:36.370 [2024-11-20 16:19:37.008217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884075 ] 00:26:36.370 [2024-11-20 16:19:37.082567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.370 [2024-11-20 16:19:37.125393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.628 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.193 nvme0n1 00:26:37.193 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:37.193 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.193 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:37.193 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.193 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:37.193 16:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.193 
Running I/O for 2 seconds... 00:26:37.194 [2024-11-20 16:19:37.956840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede8a8 00:26:37.194 [2024-11-20 16:19:37.957468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:37.957495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:37.966609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef35f0 00:26:37.194 [2024-11-20 16:19:37.967067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:37.967090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:37.976609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee6fa8 00:26:37.194 [2024-11-20 16:19:37.977192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:37.977213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:37.985557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee6738 00:26:37.194 [2024-11-20 16:19:37.986038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:37.986057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:37.995058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eed0b0 00:26:37.194 [2024-11-20 16:19:37.995838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:37.995857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:38.004921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede8a8 00:26:37.194 [2024-11-20 16:19:38.005982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:38.006002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:38.014496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee8d30 00:26:37.194 [2024-11-20 16:19:38.015570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:38.015589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:37.194 [2024-11-20 16:19:38.023815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eff3c8 00:26:37.194 [2024-11-20 16:19:38.024442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.194 [2024-11-20 16:19:38.024462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.033035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1b48 00:26:37.452 [2024-11-20 16:19:38.033550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.033574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.042571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef7100 00:26:37.452 [2024-11-20 16:19:38.043368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.043387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.052232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eec840 00:26:37.452 [2024-11-20 16:19:38.052798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.052818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.063528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef6458 00:26:37.452 [2024-11-20 16:19:38.065050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 
[2024-11-20 16:19:38.065069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.070167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.452 [2024-11-20 16:19:38.070847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.070866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.079999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef1868 00:26:37.452 [2024-11-20 16:19:38.080818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.080837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.089909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef57b0 00:26:37.452 [2024-11-20 16:19:38.091002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.091022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.099432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee99d8 00:26:37.452 [2024-11-20 16:19:38.100062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6669 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.100082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.108316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edece0 00:26:37.452 [2024-11-20 16:19:38.108815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.108854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.118005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eed920 00:26:37.452 [2024-11-20 16:19:38.118884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.118903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.129750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef31b8 00:26:37.452 [2024-11-20 16:19:38.131142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.131161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.136509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef92c0 00:26:37.452 [2024-11-20 16:19:38.137184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:24300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.137202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.148753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efdeb0 00:26:37.452 [2024-11-20 16:19:38.150134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.150163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.156254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5ec8 00:26:37.452 [2024-11-20 16:19:38.157145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.157164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.167761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee23b8 00:26:37.452 [2024-11-20 16:19:38.169265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.169283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.174528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efb048 00:26:37.452 [2024-11-20 16:19:38.175302] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.175320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.186027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eec408 00:26:37.452 [2024-11-20 16:19:38.187189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.187219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.195091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efda78 00:26:37.452 [2024-11-20 16:19:38.196161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.196181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.204534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebfd0 00:26:37.452 [2024-11-20 16:19:38.205664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.205683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.214089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with 
pdu=0x200016ee0630 00:26:37.452 [2024-11-20 16:19:38.214774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.214796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.223749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eef270 00:26:37.452 [2024-11-20 16:19:38.224686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.224705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.233739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee27f0 00:26:37.452 [2024-11-20 16:19:38.235033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.235053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.242649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1f80 00:26:37.452 [2024-11-20 16:19:38.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.243657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.251961] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2002180) with pdu=0x200016ee0630 00:26:37.452 [2024-11-20 16:19:38.252862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.252881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.262000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef7538 00:26:37.452 [2024-11-20 16:19:38.263198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.263217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.270791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef0350 00:26:37.452 [2024-11-20 16:19:38.271558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.271577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:37.452 [2024-11-20 16:19:38.280141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee95a0 00:26:37.452 [2024-11-20 16:19:38.280954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.452 [2024-11-20 16:19:38.280972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 
16:19:38.290488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeee38 00:26:37.714 [2024-11-20 16:19:38.291575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.291597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.301926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeff18 00:26:37.714 [2024-11-20 16:19:38.303516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.303536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.308663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee6fa8 00:26:37.714 [2024-11-20 16:19:38.309510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.309529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.318486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee3060 00:26:37.714 [2024-11-20 16:19:38.319362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.319381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 
sqhd:0029 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.328278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efa3a0 00:26:37.714 [2024-11-20 16:19:38.329298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.329317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.338276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5a90 00:26:37.714 [2024-11-20 16:19:38.339254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.339274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.347031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef9b30 00:26:37.714 [2024-11-20 16:19:38.347971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.347990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.355894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee4de8 00:26:37.714 [2024-11-20 16:19:38.356621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.356640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.365554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef3e60 00:26:37.714 [2024-11-20 16:19:38.366508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.366528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.376391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef0350 00:26:37.714 [2024-11-20 16:19:38.377850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.377868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.383123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef92c0 00:26:37.714 [2024-11-20 16:19:38.383854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.383872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.394704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edfdc0 00:26:37.714 [2024-11-20 16:19:38.395845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.395865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.404548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef3e60 00:26:37.714 [2024-11-20 16:19:38.405772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.405790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.412625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.714 [2024-11-20 16:19:38.413363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.413382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.421972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.714 [2024-11-20 16:19:38.422714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.422733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.431340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.714 [2024-11-20 16:19:38.432077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:37.714 [2024-11-20 16:19:38.432096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.440673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.714 [2024-11-20 16:19:38.441410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.441434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.450008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.714 [2024-11-20 16:19:38.450736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.450755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.460560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eecc78 00:26:37.714 [2024-11-20 16:19:38.461904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.461923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.469220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1f80 00:26:37.714 [2024-11-20 16:19:38.470404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:16994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.470424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.478880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee4578 00:26:37.714 [2024-11-20 16:19:38.479779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.479798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.488527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edf988 00:26:37.714 [2024-11-20 16:19:38.489434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.714 [2024-11-20 16:19:38.489453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:37.714 [2024-11-20 16:19:38.498013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edf988 00:26:37.714 [2024-11-20 16:19:38.498912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.715 [2024-11-20 16:19:38.498932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:37.715 [2024-11-20 16:19:38.507637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeee38 00:26:37.715 [2024-11-20 16:19:38.508409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.715 [2024-11-20 16:19:38.508428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:37.715 [2024-11-20 16:19:38.516159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5ec8 00:26:37.715 [2024-11-20 16:19:38.517035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.715 [2024-11-20 16:19:38.517054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:37.715 [2024-11-20 16:19:38.525593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eed920 00:26:37.715 [2024-11-20 16:19:38.526565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.715 [2024-11-20 16:19:38.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:37.715 [2024-11-20 16:19:38.537162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb760 00:26:37.715 [2024-11-20 16:19:38.538630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.715 [2024-11-20 16:19:38.538649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:37.715 [2024-11-20 16:19:38.544247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef0350 00:26:37.715 
[2024-11-20 16:19:38.545028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.715 [2024-11-20 16:19:38.545050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.554654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1f80 00:26:38.023 [2024-11-20 16:19:38.555462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.555485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.565563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1f80 00:26:38.023 [2024-11-20 16:19:38.566273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.566295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.576847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1f80 00:26:38.023 [2024-11-20 16:19:38.578112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.578132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.586551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2002180) with pdu=0x200016ee23b8 00:26:38.023 [2024-11-20 16:19:38.587777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.587797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.596275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee8088 00:26:38.023 [2024-11-20 16:19:38.597493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.597514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.605201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eec840 00:26:38.023 [2024-11-20 16:19:38.606202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.606221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.615191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efa3a0 00:26:38.023 [2024-11-20 16:19:38.616578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.616597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.624761] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edf988 00:26:38.023 [2024-11-20 16:19:38.626119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.626137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.633659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef0350 00:26:38.023 [2024-11-20 16:19:38.634894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.634912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.642837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efeb58 00:26:38.023 [2024-11-20 16:19:38.644112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.644132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.652294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeee38 00:26:38.023 [2024-11-20 16:19:38.653064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.653084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:38.023 [2024-11-20 16:19:38.661157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef0ff8 00:26:38.023 [2024-11-20 16:19:38.662448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.662467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.671556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328 00:26:38.023 [2024-11-20 16:19:38.672734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.672754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.681289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee7c50 00:26:38.023 [2024-11-20 16:19:38.682765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.023 [2024-11-20 16:19:38.682783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:38.023 [2024-11-20 16:19:38.691058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeff18 00:26:38.024 [2024-11-20 16:19:38.692699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.692722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.697666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edfdc0 00:26:38.024 [2024-11-20 16:19:38.698409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.698428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.707036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc128 00:26:38.024 [2024-11-20 16:19:38.707942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.707966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.718546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eec408 00:26:38.024 [2024-11-20 16:19:38.719920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.719940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.727296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eef270 00:26:38.024 [2024-11-20 16:19:38.728138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.728158] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.736661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edf550 00:26:38.024 [2024-11-20 16:19:38.737592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.737611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.745799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef6cc8 00:26:38.024 [2024-11-20 16:19:38.746717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.746737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.757441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef35f0 00:26:38.024 [2024-11-20 16:19:38.758831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.758849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.767239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede038 00:26:38.024 [2024-11-20 16:19:38.768787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.768806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.773931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef6cc8 00:26:38.024 [2024-11-20 16:19:38.774594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.774613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.783670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee7818 00:26:38.024 [2024-11-20 16:19:38.784462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.784481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.793457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebfd0 00:26:38.024 [2024-11-20 16:19:38.794353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.794372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.802276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edf550 00:26:38.024 [2024-11-20 16:19:38.803229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13404 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:38.024 [2024-11-20 16:19:38.803248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.813946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eec408 00:26:38.024 [2024-11-20 16:19:38.815432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.815450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.820795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efd640 00:26:38.024 [2024-11-20 16:19:38.821501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.821519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:38.024 [2024-11-20 16:19:38.832715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede038 00:26:38.024 [2024-11-20 16:19:38.833647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.024 [2024-11-20 16:19:38.833670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.842051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebfd0 00:26:38.307 [2024-11-20 16:19:38.842984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:19467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.843007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.851995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5658 00:26:38.307 [2024-11-20 16:19:38.853092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.853112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.861859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eee190 00:26:38.307 [2024-11-20 16:19:38.862516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.862539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.872612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee73e0 00:26:38.307 [2024-11-20 16:19:38.873538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.873559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.881784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebfd0 00:26:38.307 [2024-11-20 16:19:38.883124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.883143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.891565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef92c0 00:26:38.307 [2024-11-20 16:19:38.892489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.892509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.900963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efe2e8 00:26:38.307 [2024-11-20 16:19:38.901926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.901945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.910599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eea248 00:26:38.307 [2024-11-20 16:19:38.911570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.911589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.919660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb760 
00:26:38.307 [2024-11-20 16:19:38.920510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.920530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.928963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef9b30 00:26:38.307 [2024-11-20 16:19:38.929790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.929810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.939671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2510 00:26:38.307 [2024-11-20 16:19:38.940936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.940962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:38.307 26540.00 IOPS, 103.67 MiB/s [2024-11-20T15:19:39.144Z] [2024-11-20 16:19:38.950597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5658 00:26:38.307 [2024-11-20 16:19:38.952031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.952052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.960383] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee23b8 00:26:38.307 [2024-11-20 16:19:38.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.962099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.968821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efe2e8 00:26:38.307 [2024-11-20 16:19:38.969576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.969595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.978374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee0a68 00:26:38.307 [2024-11-20 16:19:38.979479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.979499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:38.307 [2024-11-20 16:19:38.987814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc128 00:26:38.307 [2024-11-20 16:19:38.988904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.307 [2024-11-20 16:19:38.988923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0
00:26:38.307 [2024-11-20 16:19:38.996975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee0630
00:26:38.307 [2024-11-20 16:19:38.998053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.307 [2024-11-20 16:19:38.998072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:38.307 [2024-11-20 16:19:39.006593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee38d0
00:26:38.307 [2024-11-20 16:19:39.007663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.007682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.016370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efef90
00:26:38.308 [2024-11-20 16:19:39.017406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.017425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.025182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.026152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.026171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.034775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.035747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.035766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.044139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.045120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.045138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.053469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.054326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.054346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.062841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.063801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.063820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.072221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.073186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.073205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.081524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.082457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.082475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.090798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.091665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.091684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.101331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.308 [2024-11-20 16:19:39.102648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.102666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.109383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc998
00:26:38.308 [2024-11-20 16:19:39.110232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.110251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.308 [2024-11-20 16:19:39.119073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc998
00:26:38.308 [2024-11-20 16:19:39.120100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.308 [2024-11-20 16:19:39.120124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.129303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc998
00:26:38.590 [2024-11-20 16:19:39.130247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.130270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.138747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc998
00:26:38.590 [2024-11-20 16:19:39.139735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.139755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.148525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc998
00:26:38.590 [2024-11-20 16:19:39.149464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.149486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.159014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede8a8
00:26:38.590 [2024-11-20 16:19:39.159841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.159860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.167924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee12d8
00:26:38.590 [2024-11-20 16:19:39.169231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.169251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.176743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef8a50
00:26:38.590 [2024-11-20 16:19:39.177457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.177476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.186397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee3d08
00:26:38.590 [2024-11-20 16:19:39.186959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.186982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.196955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1710
00:26:38.590 [2024-11-20 16:19:39.198141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.198160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.204926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef5378
00:26:38.590 [2024-11-20 16:19:39.205621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.205640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.214265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebb98
00:26:38.590 [2024-11-20 16:19:39.214943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.214965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.223584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efac10
00:26:38.590 [2024-11-20 16:19:39.224305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.224325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.233336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eee190
00:26:38.590 [2024-11-20 16:19:39.234283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.234302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.243349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eed0b0
00:26:38.590 [2024-11-20 16:19:39.244410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.244429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:38.590 [2024-11-20 16:19:39.252257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee6738
00:26:38.590 [2024-11-20 16:19:39.252913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.590 [2024-11-20 16:19:39.252931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.262709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee8d30
00:26:38.591 [2024-11-20 16:19:39.263858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.263878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.272524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef3a28
00:26:38.591 [2024-11-20 16:19:39.273821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.273840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.282016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebb98
00:26:38.591 [2024-11-20 16:19:39.283303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.283322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.290126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef4f40
00:26:38.591 [2024-11-20 16:19:39.291408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.291426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.298180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efe720
00:26:38.591 [2024-11-20 16:19:39.298830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.298848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.307904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efeb58
00:26:38.591 [2024-11-20 16:19:39.308682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.308701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.318289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5658
00:26:38.591 [2024-11-20 16:19:39.319202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.319221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.327903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede470
00:26:38.591 [2024-11-20 16:19:39.328974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.328993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.337155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016edece0
00:26:38.591 [2024-11-20 16:19:39.338328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.338347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.345877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef6020
00:26:38.591 [2024-11-20 16:19:39.346671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.346690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.355361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef9f68
00:26:38.591 [2024-11-20 16:19:39.355988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.356007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.365135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeee38
00:26:38.591 [2024-11-20 16:19:39.365852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.365871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.373699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee38d0
00:26:38.591 [2024-11-20 16:19:39.374575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.374594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.383863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee3d08
00:26:38.591 [2024-11-20 16:19:39.384688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.384707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.392628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee4140
00:26:38.591 [2024-11-20 16:19:39.393860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.393879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:38.591 [2024-11-20 16:19:39.400665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef1868
00:26:38.591 [2024-11-20 16:19:39.401310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.591 [2024-11-20 16:19:39.401328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.411243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.412098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.412120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.421697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.422525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.422545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.431407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.432299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.432325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.441622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.442438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.442458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.451042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.451888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.451907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.460431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.461245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.461265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.469687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.470503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.470522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.479063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.479907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.479927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.488375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.489203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.489222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.499145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef46d0
00:26:38.890 [2024-11-20 16:19:39.500444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.500462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.509092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee23b8
00:26:38.890 [2024-11-20 16:19:39.510532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.510550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.515917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef2d80
00:26:38.890 [2024-11-20 16:19:39.516621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.516641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.525806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.890 [2024-11-20 16:19:39.526558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.526578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.535843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.890 [2024-11-20 16:19:39.536676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.536697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.545145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.890 [2024-11-20 16:19:39.545987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.546006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.554629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.890 [2024-11-20 16:19:39.555458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.555478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.890 [2024-11-20 16:19:39.565219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.890 [2024-11-20 16:19:39.566411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.890 [2024-11-20 16:19:39.566430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.574709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee27f0
00:26:38.891 [2024-11-20 16:19:39.576007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.576027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.583899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee73e0
00:26:38.891 [2024-11-20 16:19:39.585226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.585246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.592421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeaab8
00:26:38.891 [2024-11-20 16:19:39.593755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.593775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.601944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5a90
00:26:38.891 [2024-11-20 16:19:39.602789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.602809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.610909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eebb98
00:26:38.891 [2024-11-20 16:19:39.611647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.611666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.619641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee0a68
00:26:38.891 [2024-11-20 16:19:39.620469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.620488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.629440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef7970
00:26:38.891 [2024-11-20 16:19:39.630336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.630356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.638860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eee5c8
00:26:38.891 [2024-11-20 16:19:39.639345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.639365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.648457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede8a8
00:26:38.891 [2024-11-20 16:19:39.649374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.649392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.657885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.891 [2024-11-20 16:19:39.658395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.658414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.668159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede470
00:26:38.891 [2024-11-20 16:19:39.669312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.669331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.677475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eea248
00:26:38.891 [2024-11-20 16:19:39.678347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.678369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.686263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eeb328
00:26:38.891 [2024-11-20 16:19:39.687016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.687035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:38.891 [2024-11-20 16:19:39.695773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede470
00:26:38.891 [2024-11-20 16:19:39.696640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:38.891 [2024-11-20 16:19:39.696663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.706516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efe720
00:26:39.153 [2024-11-20 16:19:39.707456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.707480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.715430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef8e88
00:26:39.153 [2024-11-20 16:19:39.716230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.716251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.724546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1b48
00:26:39.153 [2024-11-20 16:19:39.725259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.725279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.736293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efd640
00:26:39.153 [2024-11-20 16:19:39.737538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.737561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.745985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efcdd0
00:26:39.153 [2024-11-20 16:19:39.747282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.747302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.754322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee95a0
00:26:39.153 [2024-11-20 16:19:39.755629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.755650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:39.153 [2024-11-20 16:19:39.763439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee2c28
00:26:39.153 [2024-11-20 16:19:39.764182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.153 [2024-11-20 16:19:39.764202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0
dnr:0 00:26:39.153 [2024-11-20 16:19:39.773517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1710 00:26:39.153 [2024-11-20 16:19:39.774461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.153 [2024-11-20 16:19:39.774481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.153 [2024-11-20 16:19:39.784000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efe2e8 00:26:39.153 [2024-11-20 16:19:39.785091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.153 [2024-11-20 16:19:39.785111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.153 [2024-11-20 16:19:39.792768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee12d8 00:26:39.153 [2024-11-20 16:19:39.793774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.153 [2024-11-20 16:19:39.793793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.153 [2024-11-20 16:19:39.802550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef96f8 00:26:39.153 [2024-11-20 16:19:39.803626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.153 [2024-11-20 16:19:39.803645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.153 [2024-11-20 16:19:39.812348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc998 00:26:39.153 [2024-11-20 16:19:39.813615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.153 [2024-11-20 16:19:39.813634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.153 [2024-11-20 16:19:39.821178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eed0b0 00:26:39.153 [2024-11-20 16:19:39.822245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.153 [2024-11-20 16:19:39.822265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.153 [2024-11-20 16:19:39.830559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef8618 00:26:39.154 [2024-11-20 16:19:39.831517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.831536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.839814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc128 00:26:39.154 [2024-11-20 16:19:39.840792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.840812] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.850976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016efc128 00:26:39.154 [2024-11-20 16:19:39.852425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.852444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.858427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ede8a8 00:26:39.154 [2024-11-20 16:19:39.859425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.859444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.867997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eed4e8 00:26:39.154 [2024-11-20 16:19:39.869032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.869050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.877848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef3e60 00:26:39.154 [2024-11-20 16:19:39.878850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.878869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.887380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef81e0 00:26:39.154 [2024-11-20 16:19:39.888406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.888425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.896960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee1710 00:26:39.154 [2024-11-20 16:19:39.898184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.898204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.904416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef3e60 00:26:39.154 [2024-11-20 16:19:39.905164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.905183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.915400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee5a90 00:26:39.154 [2024-11-20 16:19:39.916497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:336 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:39.154 [2024-11-20 16:19:39.916517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.926307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016eefae0 00:26:39.154 [2024-11-20 16:19:39.927892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.927915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.936403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ee23b8 00:26:39.154 [2024-11-20 16:19:39.937950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.937969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.154 [2024-11-20 16:19:39.943040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2002180) with pdu=0x200016ef96f8 00:26:39.154 [2024-11-20 16:19:39.943759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.154 [2024-11-20 16:19:39.943777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:39.154 26778.00 IOPS, 104.60 MiB/s 00:26:39.154 Latency(us) 00:26:39.154 [2024-11-20T15:19:39.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.154 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:26:39.154 nvme0n1 : 2.01 26775.05 104.59 0.00 0.00 4774.01 1880.60 14816.83 00:26:39.154 [2024-11-20T15:19:39.991Z] =================================================================================================================== 00:26:39.154 [2024-11-20T15:19:39.991Z] Total : 26775.05 104.59 0.00 0.00 4774.01 1880.60 14816.83 00:26:39.154 { 00:26:39.154 "results": [ 00:26:39.154 { 00:26:39.154 "job": "nvme0n1", 00:26:39.154 "core_mask": "0x2", 00:26:39.154 "workload": "randwrite", 00:26:39.154 "status": "finished", 00:26:39.154 "queue_depth": 128, 00:26:39.154 "io_size": 4096, 00:26:39.154 "runtime": 2.005001, 00:26:39.154 "iops": 26775.048990000505, 00:26:39.154 "mibps": 104.59003511718947, 00:26:39.154 "io_failed": 0, 00:26:39.154 "io_timeout": 0, 00:26:39.154 "avg_latency_us": 4774.013706488534, 00:26:39.154 "min_latency_us": 1880.5982608695651, 00:26:39.154 "max_latency_us": 14816.834782608696 00:26:39.154 } 00:26:39.154 ], 00:26:39.154 "core_count": 1 00:26:39.154 } 00:26:39.154 16:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:39.154 16:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:39.154 16:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:39.154 | .driver_specific 00:26:39.154 | .nvme_error 00:26:39.154 | .status_code 00:26:39.154 | .command_transient_transport_error' 00:26:39.154 16:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:39.411 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2884075 00:26:39.412 16:19:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2884075 ']' 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2884075 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2884075 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2884075' 00:26:39.412 killing process with pid 2884075 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2884075 00:26:39.412 Received shutdown signal, test time was about 2.000000 seconds 00:26:39.412 00:26:39.412 Latency(us) 00:26:39.412 [2024-11-20T15:19:40.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.412 [2024-11-20T15:19:40.249Z] =================================================================================================================== 00:26:39.412 [2024-11-20T15:19:40.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.412 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2884075 00:26:39.669 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2884613 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2884613 /var/tmp/bperf.sock 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2884613 ']' 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.670 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.670 [2024-11-20 16:19:40.441159] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:26:39.670 [2024-11-20 16:19:40.441209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884613 ] 00:26:39.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.670 Zero copy mechanism will not be used. 00:26:39.927 [2024-11-20 16:19:40.515624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.927 [2024-11-20 16:19:40.558458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.928 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.928 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:39.928 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.928 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:40.185 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:40.185 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.185 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.185 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.185 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.185 16:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.444 nvme0n1 00:26:40.444 16:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:40.444 16:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.444 16:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.444 16:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.444 16:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:40.444 16:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.444 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.444 Zero copy mechanism will not be used. 00:26:40.444 Running I/O for 2 seconds... 
00:26:40.444 [2024-11-20 16:19:41.203886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.204016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.204046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.210503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.210568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.210590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.215306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.215397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.220276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.220348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.220368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.224689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.224756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.224775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.229299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.229373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.229393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.233897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.233960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.233995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.238502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.238574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.238593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.242996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.243079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.243097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.247561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.247618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.247637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.252221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.252284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-11-20 16:19:41.252303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.256811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.256915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.444 [2024-11-20 16:19:41.256933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.444 [2024-11-20 16:19:41.261831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.444 [2024-11-20 16:19:41.261926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-11-20 16:19:41.261945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.445 [2024-11-20 16:19:41.266468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.445 [2024-11-20 16:19:41.266543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-11-20 16:19:41.266562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.445 [2024-11-20 16:19:41.271063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.445 [2024-11-20 16:19:41.271147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-11-20 16:19:41.271166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.445 [2024-11-20 16:19:41.275763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.445 [2024-11-20 16:19:41.275822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-11-20 16:19:41.275844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.702 [2024-11-20 16:19:41.280440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.702 [2024-11-20 16:19:41.280524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.702 [2024-11-20 16:19:41.280546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.702 [2024-11-20 16:19:41.285314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.702 [2024-11-20 16:19:41.285382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.702 [2024-11-20 16:19:41.285403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.702 [2024-11-20 16:19:41.289998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.290077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.290096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.294624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.294696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.294715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.299032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.299179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.299208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.303962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.304124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.304142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.310299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.310465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.310488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.316234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:40.703 [2024-11-20 16:19:41.316374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.316392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.322673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.322791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.322810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.329365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.329533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.329551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.335732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.335885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.335905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.342160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.342331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.342350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.348942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.349084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.349103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.355772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.355935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.355960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.362242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.362405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.362423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.368511] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.368660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.368678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.374935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.375103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.375137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.381363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.381532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.381550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.387886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.388043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.388062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:40.703 [2024-11-20 16:19:41.394427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.394590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.394608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.400650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.400821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.400839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.406936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.407099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.407117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.413493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.413664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.413682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.419761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.419933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.419956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.426081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.426213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.426231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.433033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.433184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.703 [2024-11-20 16:19:41.433202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.703 [2024-11-20 16:19:41.439581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.703 [2024-11-20 16:19:41.439732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.439752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.444812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.444890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.444908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.449654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.449732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.449750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.454223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.454288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.454306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.459080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.459133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.704 [2024-11-20 16:19:41.459151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.464803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.464877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.464897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.469916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.469994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.470020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.474677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.474754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.474772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.479794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.479882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.479901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.485253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.485375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.485394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.490841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.490906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.490924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.496445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.496499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.496517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.501831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.501925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.501943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.507425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.507498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.507517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.515275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.515410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.515428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.522906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.522993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.523012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.529430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:40.704 [2024-11-20 16:19:41.529510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.529529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.704 [2024-11-20 16:19:41.535337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.704 [2024-11-20 16:19:41.535448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.704 [2024-11-20 16:19:41.535469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.541208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.541267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.541292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.546646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.546776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.546797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.552261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.552400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.552420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.557544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.557647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.557666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.562620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.562688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.562708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.568440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.568497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.573729] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.573863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.573881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.579315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.579451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.579470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.584470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.584625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.584643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.590044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.590121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.590140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:40.964 [2024-11-20 16:19:41.595313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.595394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.595412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.600128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.600193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.600211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.604777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.604835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.604854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.610281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.610412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.610431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.615343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.615397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.615419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.620771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.620827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.620845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.625773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.625854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.625872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.631123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.631217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.631235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.637102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.637161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.637179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.642353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.642423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.642441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.647520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.647585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.964 [2024-11-20 16:19:41.647604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.964 [2024-11-20 16:19:41.652240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.964 [2024-11-20 16:19:41.652304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.964 [2024-11-20 16:19:41.652323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.656820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.656876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.656894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.661368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.661439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.661458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.666522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.666690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.666709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.672748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.672907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.672926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.679159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.679291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.679310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.685653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.685805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.685824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.692231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.692402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.692421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.699587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.699757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.699775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.705838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.706025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.706043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.712243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.712399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.712418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.718893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.719071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.719089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.725541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:40.965 [2024-11-20 16:19:41.725680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.725698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.732403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.732557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.732575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.738735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.738912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.738930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.745435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.745591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.752764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.752935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.752961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.759261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.759420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.759439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.765513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.765599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.765616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.771571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.771728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.771751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.777842] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.778027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.778047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.784437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.784622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.784640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.965 [2024-11-20 16:19:41.791244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:40.965 [2024-11-20 16:19:41.791421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.965 [2024-11-20 16:19:41.791439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.798539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.798702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.798724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:41.225 [2024-11-20 16:19:41.805437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.805520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.805541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.813017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.813156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.813176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.819911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.820239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.820259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.827096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.827479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.827498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.833721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.834071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.834091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.840371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.840686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.840707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.847817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.847923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.847942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.853139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.225 [2024-11-20 16:19:41.853402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.225 [2024-11-20 16:19:41.853422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.225 [2024-11-20 16:19:41.857802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.858090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.858109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.862391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.862685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.862705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.866816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.867112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.867131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.871220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.871491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.226 [2024-11-20 16:19:41.871510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.875854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.876147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.876167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.880577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.880848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.880867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.885800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.886096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.886116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.890869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.891159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.891179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.896141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.896405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.896424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.900817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.901082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.901101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.905349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.905608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.905627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.909910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.910187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.910206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.915131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.915368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.915389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.920115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.920376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.920400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.925218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.925451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.925471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.930598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:41.226 [2024-11-20 16:19:41.930862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.930881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.935538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.935802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.935821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.940317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.940564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.940584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.945116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.945353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.945372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.950040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.950313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.950333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.955276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.955522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.955542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.960422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.226 [2024-11-20 16:19:41.960699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.226 [2024-11-20 16:19:41.960719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.226 [2024-11-20 16:19:41.965189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.965435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.965456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:41.970081] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.970342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.970362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:41.975208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.975472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.975491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:41.981368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.981638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.981657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:41.986554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.986809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.986829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:41.227 [2024-11-20 16:19:41.991071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.991331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.991351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:41.995474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.995743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.995763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:41.999719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:41.999973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:41.999993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.004134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.004387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.004408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.008603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.008889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.008908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.013080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.013329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.013348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.017459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.017712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.017731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.021649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.021909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.021929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.026148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.026423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.026442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.030672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.030954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.030974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.035521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.035767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.035786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.040682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.040938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.040965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.045180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.045467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.045490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.049665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.049916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.049935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.054053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.054301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.227 [2024-11-20 16:19:42.054320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.227 [2024-11-20 16:19:42.058634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.227 [2024-11-20 16:19:42.058878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:41.227 [2024-11-20 16:19:42.058901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.063057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.063296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.063319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.067368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.067640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.067663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.071693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.071960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.071980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.076145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.076425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.076445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.081057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.081326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.081347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.085682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.085958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.085978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.090094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.090375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.090395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.094620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.094893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.094913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.099299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.099549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.099568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.103827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.104079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.104098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.108064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.108331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.108352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.112887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:41.488 [2024-11-20 16:19:42.113230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.113250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.118974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.119311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.119331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.124121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.124368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.124388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.488 [2024-11-20 16:19:42.128908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.488 [2024-11-20 16:19:42.129174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.488 [2024-11-20 16:19:42.129194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.133391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.133652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.133671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.138572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.138912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.138931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.144792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.145174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.145194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.150942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.151295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.151331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.157304] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.157660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.157679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.163430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.163767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.163786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.169501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.169872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.174563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.174810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.174834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:41.489 [2024-11-20 16:19:42.178901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.179194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.179214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.183204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.183490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.183510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.187474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.187750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.187769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.191743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.192018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.192038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.195993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.196277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.196296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.489 5688.00 IOPS, 711.00 MiB/s [2024-11-20T15:19:42.326Z] [2024-11-20 16:19:42.201601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.201857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.201877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.205858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.206119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.206137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.210315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.210575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 
16:19:42.210595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.214876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.215133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.215153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.219959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.220224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.220243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.225104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.225371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.225391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.229840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.230117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.230137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.234413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.234662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.234682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.239039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.239297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.239316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.243364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.489 [2024-11-20 16:19:42.243646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.489 [2024-11-20 16:19:42.243667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.489 [2024-11-20 16:19:42.247720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.248001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.248021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.252119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.252375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.252395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.257001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.257249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.257268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.262102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.262351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.262371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.266969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.267246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.267266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.271840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.272078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.272098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.276942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.277206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.277225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.281961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.282234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.282253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.286957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:41.490 [2024-11-20 16:19:42.287229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.287248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.292013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.292272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.292292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.296477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.296727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.296750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.301133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.301389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.301408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.306239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.306520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.306540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.311403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.311650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.311669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.315958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.316204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.316223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.490 [2024-11-20 16:19:42.320473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.490 [2024-11-20 16:19:42.320745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.490 [2024-11-20 16:19:42.320768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.324876] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.325133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.325155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.329282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.329536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.333695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.333977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.333998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.338266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.338526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.338546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:41.749 [2024-11-20 16:19:42.342734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.342994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.343014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.347165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.347429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.347449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.351582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.351839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.351858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.355903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.356177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.356198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.360347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.360613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.360633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.365152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.365424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.365444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.370304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.370556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.370575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.375459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.375727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.375747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.380548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.380809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.380830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.385754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.386017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.386037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.391091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.391363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.391383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.395944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.396214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.749 [2024-11-20 16:19:42.396234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.401323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.401593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.406460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.406727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.406747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.411331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.411584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.749 [2024-11-20 16:19:42.411605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.749 [2024-11-20 16:19:42.415890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.749 [2024-11-20 16:19:42.416182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.416202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.420223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.420475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.420500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.424627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.424886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.424907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.429147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.429398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.429418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.433717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.433982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.438219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.438497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.438516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.442620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.442893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.442913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.447184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.447489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.447509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.452248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:41.750 [2024-11-20 16:19:42.452490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.452510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.457420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.457663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.457683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.462315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.462583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.462603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.466938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.467196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.467216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.471589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.471843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.471864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.476078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.476326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.476346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.480484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.480747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.480767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.484920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.485175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.485195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.489749] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.490043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.490063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.495006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.495269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.495289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.500506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.500783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.500803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.507098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.507299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.507319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:41.750 [2024-11-20 16:19:42.513776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.514147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.514167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.520422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.520758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.520778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.527577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.527896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.527916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.535538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.535881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.535900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.541441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.541677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.541696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.750 [2024-11-20 16:19:42.547498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.750 [2024-11-20 16:19:42.547760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.750 [2024-11-20 16:19:42.547780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.552196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.552462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.552482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.556614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.556855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.556879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.561214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.561485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.561505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.565514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.565791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.565811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.569852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.570114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.570133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.574068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.574328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.751 [2024-11-20 16:19:42.574347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.578396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.578656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.578675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.751 [2024-11-20 16:19:42.582842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:41.751 [2024-11-20 16:19:42.583120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.751 [2024-11-20 16:19:42.583149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.010 [2024-11-20 16:19:42.587518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.010 [2024-11-20 16:19:42.587779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.010 [2024-11-20 16:19:42.587802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.010 [2024-11-20 16:19:42.591845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.010 [2024-11-20 16:19:42.592132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.010 [2024-11-20 16:19:42.592154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.010 [2024-11-20 16:19:42.596340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.010 [2024-11-20 16:19:42.596592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.010 [2024-11-20 16:19:42.596613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.010 [2024-11-20 16:19:42.601322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.010 [2024-11-20 16:19:42.601581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.010 [2024-11-20 16:19:42.601601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.010 [2024-11-20 16:19:42.606223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.010 [2024-11-20 16:19:42.606466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.010 [2024-11-20 16:19:42.606486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.010 [2024-11-20 16:19:42.611302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.611572] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.611592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.616694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.616931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.616956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.621568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.621815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.621835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.626735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.626996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.627015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.631617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.631894] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.631913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.636529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.636782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.636802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.641699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.641946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.641972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.647281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.011 [2024-11-20 16:19:42.647532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.011 [2024-11-20 16:19:42.647551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.011 [2024-11-20 16:19:42.652146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with 
pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.652408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.652427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.657411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.657662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.662182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.662451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.662471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.667037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.667283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.667302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.672412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.672650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.672670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.678156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.678403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.678423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.683091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.683362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.683386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.688272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.688357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.688375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.692985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.693220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.693240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.698146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.698390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.698410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.703554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.703806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.703827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.708914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.709154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.709174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.713829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.714102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.714122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.718738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.718989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.719010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.723927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.724184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.724205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.728613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.011 [2024-11-20 16:19:42.728870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.011 [2024-11-20 16:19:42.728889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.011 [2024-11-20 16:19:42.734361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.734610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.734631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.739701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.739972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.739992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.744662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.744920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.744939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.749554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.749844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.754968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.755223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.755243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.759814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.760067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.760087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.764762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.765025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.765045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.769827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.770096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.770116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.774758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.775030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.775051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.779730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.780237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.780257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.785180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.785427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.785447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.790072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.790308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.790327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.795414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.795665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.800304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.800555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.800575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.805219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.805486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.805506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.810721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.810977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.810997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.815525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.815785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.815809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.820324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.820570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.820589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.825639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.825898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.825918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.830628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.830878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.830898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.835546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.835790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.835810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.012 [2024-11-20 16:19:42.840546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.012 [2024-11-20 16:19:42.840662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.012 [2024-11-20 16:19:42.840684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.845301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.845559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.845582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.850993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.851258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.851281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.855852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.856138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.856159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.861338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.861584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.861604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.866316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.866563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.866583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.871062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.871314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.871333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.876273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.876547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.876566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.880808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.881079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.881098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.885512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.885757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.885777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.890555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.890823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.890843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.895484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.895727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.895746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.900110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.900369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.900388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.904569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.904835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.904855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.908905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.909167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.909188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.913577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.913831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.913851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.918029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.918283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.273 [2024-11-20 16:19:42.918303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.273 [2024-11-20 16:19:42.922760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.273 [2024-11-20 16:19:42.923023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.923043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.927285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.927526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.927545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.931642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.931892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.931911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.935980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.936256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.936276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.940302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.940558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.940582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.944538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.944793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.944813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.948820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.949094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.949113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.953106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.953385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.953405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.957650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.957938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.957964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.962433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.962682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.962701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.967442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.967709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.967729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.972798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.973053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.973073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.978410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.978662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.978684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.983121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.983374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.983394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.987669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.987924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.987944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.992170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.992456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.992476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:42.996693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:42.996940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:42.996966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.001190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.001468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.001488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.005645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.005902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.005922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.010192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.010469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.010488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.014583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.014852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.014871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.019006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.019284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.019304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.023405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.023683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.023702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.027787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.028074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.028094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.032115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.032399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.032418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.036476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.036727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.036746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.040805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.274 [2024-11-20 16:19:43.041069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.274 [2024-11-20 16:19:43.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.274 [2024-11-20 16:19:43.045163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8
00:26:42.275 [2024-11-20 16:19:43.045437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.275 [2024-11-20 16:19:43.045456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.275 [2024-11-20 16:19:43.049778]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.050035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.050054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.054407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.054688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.054707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.059062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.059331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.059355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.063701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.063977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.063997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:42.275 [2024-11-20 16:19:43.068310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.068565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.068584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.072833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.073100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.073120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.077365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.077615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.077634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.081883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.082140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.082159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.086498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.086799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.086819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.090941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.091177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.091197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.095271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.095512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.095532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.099607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.099843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.099862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.275 [2024-11-20 16:19:43.103989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.275 [2024-11-20 16:19:43.104242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.275 [2024-11-20 16:19:43.104265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.534 [2024-11-20 16:19:43.108411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.534 [2024-11-20 16:19:43.108626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.534 [2024-11-20 16:19:43.108649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.534 [2024-11-20 16:19:43.112662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.534 [2024-11-20 16:19:43.112898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.534 [2024-11-20 16:19:43.112920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.534 [2024-11-20 16:19:43.116996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.117232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:42.535 [2024-11-20 16:19:43.117252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.121260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.121480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.121500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.125423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.125634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.125654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.129494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.129710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.129729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.133603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.133841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.133861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.137694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.137912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.137932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.141889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.142107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.142126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.146604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.146926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.146946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.152239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.152525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.152545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.158184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.158476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.158496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.163767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.164062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.164081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.169357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.169668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.169688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.174933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 
00:26:42.535 [2024-11-20 16:19:43.175260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.175279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.180798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.181101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.181124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.186634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.186921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.186941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.192329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.192645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.192664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.535 [2024-11-20 16:19:43.198432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20024c0) with pdu=0x200016eff3c8 00:26:42.535 [2024-11-20 16:19:43.198760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.535 [2024-11-20 16:19:43.198781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.535 6035.00 IOPS, 754.38 MiB/s 00:26:42.535 Latency(us) 00:26:42.535 [2024-11-20T15:19:43.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.535 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:42.535 nvme0n1 : 2.00 6031.04 753.88 0.00 0.00 2648.22 1951.83 11910.46 00:26:42.535 [2024-11-20T15:19:43.372Z] =================================================================================================================== 00:26:42.535 [2024-11-20T15:19:43.372Z] Total : 6031.04 753.88 0.00 0.00 2648.22 1951.83 11910.46 00:26:42.535 { 00:26:42.535 "results": [ 00:26:42.535 { 00:26:42.535 "job": "nvme0n1", 00:26:42.535 "core_mask": "0x2", 00:26:42.535 "workload": "randwrite", 00:26:42.535 "status": "finished", 00:26:42.535 "queue_depth": 16, 00:26:42.535 "io_size": 131072, 00:26:42.535 "runtime": 2.003965, 00:26:42.535 "iops": 6031.04345634779, 00:26:42.535 "mibps": 753.8804320434738, 00:26:42.535 "io_failed": 0, 00:26:42.535 "io_timeout": 0, 00:26:42.535 "avg_latency_us": 2648.2240660771713, 00:26:42.535 "min_latency_us": 1951.8330434782608, 00:26:42.535 "max_latency_us": 11910.455652173912 00:26:42.535 } 00:26:42.535 ], 00:26:42.535 "core_count": 1 00:26:42.535 } 00:26:42.535 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:42.535 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:42.535 16:19:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:42.535 | .driver_specific 00:26:42.535 | .nvme_error 00:26:42.535 | .status_code 00:26:42.535 | .command_transient_transport_error' 00:26:42.535 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 390 > 0 )) 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2884613 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2884613 ']' 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2884613 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2884613 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2884613' 00:26:42.794 killing process with pid 2884613 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2884613 00:26:42.794 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.794 00:26:42.794 
Latency(us) 00:26:42.794 [2024-11-20T15:19:43.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.794 [2024-11-20T15:19:43.631Z] =================================================================================================================== 00:26:42.794 [2024-11-20T15:19:43.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.794 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2884613 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2882942 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2882942 ']' 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2882942 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882942 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882942' 00:26:43.053 killing process with pid 2882942 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2882942 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2882942 00:26:43.053 00:26:43.053 real 0m13.964s 
00:26:43.053 user 0m26.840s 00:26:43.053 sys 0m4.472s 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.053 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.053 ************************************ 00:26:43.053 END TEST nvmf_digest_error 00:26:43.053 ************************************ 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:43.313 rmmod nvme_tcp 00:26:43.313 rmmod nvme_fabrics 00:26:43.313 rmmod nvme_keyring 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2882942 ']' 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2882942 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2882942 ']' 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2882942 
00:26:43.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2882942) - No such process 00:26:43.313 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2882942 is not found' 00:26:43.314 Process with pid 2882942 is not found 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.314 16:19:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.220 16:19:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:45.220 00:26:45.220 real 0m36.590s 00:26:45.220 user 0m56.005s 00:26:45.220 sys 0m13.574s 00:26:45.220 16:19:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.220 16:19:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.220 
************************************ 00:26:45.220 END TEST nvmf_digest 00:26:45.220 ************************************ 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.480 ************************************ 00:26:45.480 START TEST nvmf_bdevperf 00:26:45.480 ************************************ 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:45.480 * Looking for test storage... 
00:26:45.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:45.480 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.481 --rc genhtml_branch_coverage=1 00:26:45.481 --rc genhtml_function_coverage=1 00:26:45.481 --rc genhtml_legend=1 00:26:45.481 --rc geninfo_all_blocks=1 00:26:45.481 --rc geninfo_unexecuted_blocks=1 00:26:45.481 00:26:45.481 ' 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:45.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.481 --rc genhtml_branch_coverage=1 00:26:45.481 --rc genhtml_function_coverage=1 00:26:45.481 --rc genhtml_legend=1 00:26:45.481 --rc geninfo_all_blocks=1 00:26:45.481 --rc geninfo_unexecuted_blocks=1 00:26:45.481 00:26:45.481 ' 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.481 --rc genhtml_branch_coverage=1 00:26:45.481 --rc genhtml_function_coverage=1 00:26:45.481 --rc genhtml_legend=1 00:26:45.481 --rc geninfo_all_blocks=1 00:26:45.481 --rc geninfo_unexecuted_blocks=1 00:26:45.481 00:26:45.481 ' 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.481 --rc genhtml_branch_coverage=1 00:26:45.481 --rc genhtml_function_coverage=1 00:26:45.481 --rc genhtml_legend=1 00:26:45.481 --rc geninfo_all_blocks=1 00:26:45.481 --rc geninfo_unexecuted_blocks=1 00:26:45.481 00:26:45.481 ' 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.481 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.741 16:19:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:52.310 16:19:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.310 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:52.311 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.311 
16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:52.311 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:52.311 Found net devices under 0000:86:00.0: cvl_0_0 00:26:52.311 16:19:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:52.311 Found net devices under 0000:86:00.1: cvl_0_1 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.311 16:19:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:52.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:26:52.311 00:26:52.311 --- 10.0.0.2 ping statistics --- 00:26:52.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.311 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:26:52.311 00:26:52.311 --- 10.0.0.1 ping statistics --- 00:26:52.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.311 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2888614 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2888614 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2888614 ']' 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.311 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 [2024-11-20 16:19:52.312916] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:26:52.312 [2024-11-20 16:19:52.312978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.312 [2024-11-20 16:19:52.393349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:52.312 [2024-11-20 16:19:52.437247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.312 [2024-11-20 16:19:52.437284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.312 [2024-11-20 16:19:52.437291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.312 [2024-11-20 16:19:52.437297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.312 [2024-11-20 16:19:52.437304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:52.312 [2024-11-20 16:19:52.438705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.312 [2024-11-20 16:19:52.438810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.312 [2024-11-20 16:19:52.438811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 [2024-11-20 16:19:52.572638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 Malloc0 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 [2024-11-20 16:19:52.635643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:52.312 
16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:52.312 { 00:26:52.312 "params": { 00:26:52.312 "name": "Nvme$subsystem", 00:26:52.312 "trtype": "$TEST_TRANSPORT", 00:26:52.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.312 "adrfam": "ipv4", 00:26:52.312 "trsvcid": "$NVMF_PORT", 00:26:52.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.312 "hdgst": ${hdgst:-false}, 00:26:52.312 "ddgst": ${ddgst:-false} 00:26:52.312 }, 00:26:52.312 "method": "bdev_nvme_attach_controller" 00:26:52.312 } 00:26:52.312 EOF 00:26:52.312 )") 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:52.312 16:19:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:52.312 "params": { 00:26:52.312 "name": "Nvme1", 00:26:52.312 "trtype": "tcp", 00:26:52.312 "traddr": "10.0.0.2", 00:26:52.312 "adrfam": "ipv4", 00:26:52.312 "trsvcid": "4420", 00:26:52.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.312 "hdgst": false, 00:26:52.312 "ddgst": false 00:26:52.312 }, 00:26:52.312 "method": "bdev_nvme_attach_controller" 00:26:52.312 }' 00:26:52.312 [2024-11-20 16:19:52.686866] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:26:52.312 [2024-11-20 16:19:52.686910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888671 ]
00:26:52.312 [2024-11-20 16:19:52.762072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:52.312 [2024-11-20 16:19:52.803485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:52.569 Running I/O for 1 seconds...
00:26:53.500 11096.00 IOPS, 43.34 MiB/s
00:26:53.500 Latency(us)
00:26:53.500 [2024-11-20T15:19:54.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.500 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:53.500 Verification LBA range: start 0x0 length 0x4000
00:26:53.500 Nvme1n1 : 1.01 11143.51 43.53 0.00 0.00 11433.38 1994.57 12936.24
00:26:53.500 [2024-11-20T15:19:54.337Z] ===================================================================================================================
00:26:53.500 [2024-11-20T15:19:54.337Z] Total : 11143.51 43.53 0.00 0.00 11433.38 1994.57 12936.24
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2889042
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:53.757 {
00:26:53.757 "params": {
00:26:53.757 "name": "Nvme$subsystem",
00:26:53.757 "trtype": "$TEST_TRANSPORT",
00:26:53.757 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:53.757 "adrfam": "ipv4",
00:26:53.757 "trsvcid": "$NVMF_PORT",
00:26:53.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:53.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:53.757 "hdgst": ${hdgst:-false},
00:26:53.757 "ddgst": ${ddgst:-false}
00:26:53.757 },
00:26:53.757 "method": "bdev_nvme_attach_controller"
00:26:53.757 }
00:26:53.757 EOF
00:26:53.757 )")
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:26:53.757 16:19:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:26:53.757 "params": {
00:26:53.757 "name": "Nvme1",
00:26:53.757 "trtype": "tcp",
00:26:53.757 "traddr": "10.0.0.2",
00:26:53.757 "adrfam": "ipv4",
00:26:53.757 "trsvcid": "4420",
00:26:53.757 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:53.757 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:53.757 "hdgst": false,
00:26:53.757 "ddgst": false
00:26:53.757 },
00:26:53.757 "method": "bdev_nvme_attach_controller"
00:26:53.757 }'
00:26:53.757 [2024-11-20 16:19:54.385047] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
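The run-1 numbers reported above are internally consistent, which is worth a quick check: with queue depth 128 (`-q 128`) and an average completion latency of 11433.38 us, Little's law (rate = queue depth / latency) predicts a throughput very close to the measured 11143.51 IOPS. The check below is an editor's sanity calculation, not part of the test script.

```shell
# Little's law estimate for the 1-second verify run: qd / avg latency.
qd=128
avg_lat_us=11433.38
expected_iops=$(awk -v qd="$qd" -v lat="$avg_lat_us" \
    'BEGIN { printf "%.0f", qd * 1000000 / lat }')
echo "Little's law estimate: $expected_iops IOPS"   # → 11195, within ~0.5% of 11143.51
```

The small gap is expected: the measured figure also absorbs ramp-up and the fractional 1.01 s runtime.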
00:26:53.757 [2024-11-20 16:19:54.385096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889042 ]
00:26:53.757 [2024-11-20 16:19:54.459414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:53.757 [2024-11-20 16:19:54.500681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:54.014 Running I/O for 15 seconds...
00:26:55.876 11083.00 IOPS, 43.29 MiB/s [2024-11-20T15:19:57.652Z] 10989.00 IOPS, 42.93 MiB/s [2024-11-20T15:19:57.652Z] 16:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2888614
00:26:56.815 16:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:26:56.815 [2024-11-20 16:19:57.353776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.815 [2024-11-20 16:19:57.353810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.815 [2024-11-20 16:19:57.353828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.815 [2024-11-20 16:19:57.353837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.815 [2024-11-20 16:19:57.353848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.815 [2024-11-20 16:19:57.353856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.815 [2024-11-20 16:19:57.353866] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.815 [2024-11-20 16:19:57.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.815 [2024-11-20 16:19:57.353891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.815 [2024-11-20 16:19:57.353906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.815 [2024-11-20 16:19:57.353921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.815 [2024-11-20 16:19:57.353937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.815 [2024-11-20 16:19:57.353958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.815 [2024-11-20 16:19:57.353975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.353990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.815 [2024-11-20 16:19:57.353998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.354008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.815 [2024-11-20 16:19:57.354015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.815 [2024-11-20 16:19:57.354025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 
16:19:57.354066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.816 [2024-11-20 16:19:57.354487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.816 [2024-11-20 16:19:57.354494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:56.816 [2024-11-20 16:19:57.354502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 
[2024-11-20 16:19:57.354748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.817 [2024-11-20 16:19:57.354901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.817 [2024-11-20 16:19:57.354916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 [2024-11-20 16:19:57.354990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.817 [2024-11-20 16:19:57.354996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.817 
[2024-11-20 16:19:57.355005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.818 [2024-11-20 16:19:57.355011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.818 [2024-11-20 16:19:57.355019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.818 [2024-11-20 16:19:57.355026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.818 [2024-11-20 16:19:57.355034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.818 [2024-11-20 16:19:57.355040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.818 [2024-11-20 16:19:57.355048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.818 [2024-11-20 16:19:57.355055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.818 [2024-11-20 16:19:57.355063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.818 [2024-11-20 16:19:57.355070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.818 [2024-11-20 16:19:57.355078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.818 [2024-11-20 16:19:57.355084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1
lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818
[2024-11-20 16:19:57.355254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355335] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1
lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.818 [2024-11-20 16:19:57.355489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.818 [2024-11-20 16:19:57.355496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819
[2024-11-20 16:19:57.355508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355595] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.819 [2024-11-20 16:19:57.355652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.819 [2024-11-20 16:19:57.355669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1
lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.819 [2024-11-20 16:19:57.355683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.819 [2024-11-20 16:19:57.355699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.819 [2024-11-20 16:19:57.355714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.819 [2024-11-20 16:19:57.355729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:56.819 [2024-11-20 16:19:57.355743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.355750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2747af0 is same with the state(6) to be set
00:26:56.819 [2024-11-20 16:19:57.355759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:56.819 [2024-11-20 16:19:57.355765] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:56.819 [2024-11-20 16:19:57.355771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105856 len:8 PRP1 0x0 PRP2 0x0
00:26:56.819 [2024-11-20 16:19:57.355779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:56.819 [2024-11-20 16:19:57.358754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.819 [2024-11-20 16:19:57.358808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.819 [2024-11-20 16:19:57.359415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.819 [2024-11-20 16:19:57.359432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.819 [2024-11-20 16:19:57.359440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.819 [2024-11-20 16:19:57.359619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.819 [2024-11-20 16:19:57.359797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.819 [2024-11-20 16:19:57.359806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.819 [2024-11-20 16:19:57.359813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.819 [2024-11-20 16:19:57.359821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.819 [2024-11-20 16:19:57.371978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.819 [2024-11-20 16:19:57.372463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.819 [2024-11-20 16:19:57.372511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.819 [2024-11-20 16:19:57.372534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.819 [2024-11-20 16:19:57.373100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.819 [2024-11-20 16:19:57.373275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.819 [2024-11-20 16:19:57.373287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.819 [2024-11-20 16:19:57.373294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.819 [2024-11-20 16:19:57.373301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.819 [2024-11-20 16:19:57.384884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.819 [2024-11-20 16:19:57.385353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.819 [2024-11-20 16:19:57.385399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.819 [2024-11-20 16:19:57.385422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.819 [2024-11-20 16:19:57.385964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.819 [2024-11-20 16:19:57.386138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.819 [2024-11-20 16:19:57.386147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.819 [2024-11-20 16:19:57.386153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.819 [2024-11-20 16:19:57.386160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.819 [2024-11-20 16:19:57.399789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.819 [2024-11-20 16:19:57.400337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.819 [2024-11-20 16:19:57.400383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.819 [2024-11-20 16:19:57.400406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.819 [2024-11-20 16:19:57.401001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.819 [2024-11-20 16:19:57.401496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.819 [2024-11-20 16:19:57.401507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.819 [2024-11-20 16:19:57.401517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.819 [2024-11-20 16:19:57.401526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.412696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.413130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.413176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.413200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.413780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.414034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.414044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.414050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.820 [2024-11-20 16:19:57.414061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.425697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.426034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.426051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.426059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.426244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.426409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.426418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.426424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.820 [2024-11-20 16:19:57.426429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.438657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.439103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.439121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.439128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.439304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.439467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.439475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.439481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.820 [2024-11-20 16:19:57.439488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.451695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.452145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.452162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.452169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.452332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.452494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.452502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.452508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.820 [2024-11-20 16:19:57.452514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.464727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.465192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.465238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.465262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.465843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.466356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.466364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.466371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.820 [2024-11-20 16:19:57.466377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.477659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.478057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.478074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.478082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.478258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.478421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.478429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.478435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.820 [2024-11-20 16:19:57.478441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.820 [2024-11-20 16:19:57.490606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.820 [2024-11-20 16:19:57.491028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.820 [2024-11-20 16:19:57.491046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.820 [2024-11-20 16:19:57.491053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.820 [2024-11-20 16:19:57.491231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.820 [2024-11-20 16:19:57.491394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.820 [2024-11-20 16:19:57.491402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.820 [2024-11-20 16:19:57.491408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.821 [2024-11-20 16:19:57.491414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.821 [2024-11-20 16:19:57.503686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.821 [2024-11-20 16:19:57.504049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.821 [2024-11-20 16:19:57.504067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:56.821 [2024-11-20 16:19:57.504074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:56.821 [2024-11-20 16:19:57.504256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:56.821 [2024-11-20 16:19:57.504434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:56.821 [2024-11-20 16:19:57.504442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:56.821 [2024-11-20 16:19:57.504449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:56.821 [2024-11-20 16:19:57.504455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:56.821 [2024-11-20 16:19:57.516668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.517087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.517105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.517112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.517285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.517459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.517468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.517474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.517480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.529507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.529932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.529953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.529960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.530148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.530320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.530328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.530334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.530340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.542446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.542867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.542883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.542890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.543079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.543253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.543264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.543271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.543277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.555380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.555771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.555816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.555839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.556433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.556831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.556839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.556845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.556851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.568291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.568654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.568699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.568722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.569317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.569780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.569789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.569795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.569801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.581161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.581523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.581540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.581547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.581720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.581894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.581902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.581909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.581919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.594109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.594488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.594505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.821 [2024-11-20 16:19:57.594512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.821 [2024-11-20 16:19:57.594685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.821 [2024-11-20 16:19:57.594857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.821 [2024-11-20 16:19:57.594866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.821 [2024-11-20 16:19:57.594872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.821 [2024-11-20 16:19:57.594878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.821 [2024-11-20 16:19:57.607059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.821 [2024-11-20 16:19:57.607503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.821 [2024-11-20 16:19:57.607520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.822 [2024-11-20 16:19:57.607527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.822 [2024-11-20 16:19:57.607699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.822 [2024-11-20 16:19:57.607871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.822 [2024-11-20 16:19:57.607879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.822 [2024-11-20 16:19:57.607885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.822 [2024-11-20 16:19:57.607891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.822 [2024-11-20 16:19:57.620165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.822 [2024-11-20 16:19:57.620611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.822 [2024-11-20 16:19:57.620656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.822 [2024-11-20 16:19:57.620678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.822 [2024-11-20 16:19:57.621271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.822 [2024-11-20 16:19:57.621821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.822 [2024-11-20 16:19:57.621830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.822 [2024-11-20 16:19:57.621837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.822 [2024-11-20 16:19:57.621844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.822 [2024-11-20 16:19:57.633276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.822 [2024-11-20 16:19:57.633718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.822 [2024-11-20 16:19:57.633734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:56.822 [2024-11-20 16:19:57.633742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:56.822 [2024-11-20 16:19:57.633919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:56.822 [2024-11-20 16:19:57.634107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.822 [2024-11-20 16:19:57.634116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.822 [2024-11-20 16:19:57.634122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.822 [2024-11-20 16:19:57.634129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.082 [2024-11-20 16:19:57.646382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.082 [2024-11-20 16:19:57.646792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.082 [2024-11-20 16:19:57.646808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.082 [2024-11-20 16:19:57.646816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.646993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.647166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.647174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.647180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.647186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.659433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.659886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.659940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.659979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.660560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.660822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.660831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.660837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.660843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 9867.33 IOPS, 38.54 MiB/s [2024-11-20T15:19:57.920Z] [2024-11-20 16:19:57.673517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.673958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.673990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.673998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.674174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.674347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.674356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.674362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.674369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.686429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.686767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.686783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.686790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.686958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.687145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.687153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.687159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.687165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.699226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.699657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.699674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.699681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.699853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.700033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.700042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.700048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.700054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.712055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.712475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.712491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.712497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.712660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.712824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.712834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.712841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.712846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.724859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.725315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.725332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.725339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.725512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.725685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.725693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.725699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.725705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.737803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.738160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.738176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.738184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.738356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.738532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.738540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.738546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.738552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.750680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.751096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.751114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.751121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.751285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.751448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.751456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.751463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.751472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.083 [2024-11-20 16:19:57.763513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.083 [2024-11-20 16:19:57.763891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.083 [2024-11-20 16:19:57.763907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.083 [2024-11-20 16:19:57.763914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.083 [2024-11-20 16:19:57.764104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.083 [2024-11-20 16:19:57.764284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.083 [2024-11-20 16:19:57.764293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.083 [2024-11-20 16:19:57.764299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.083 [2024-11-20 16:19:57.764305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.084 [2024-11-20 16:19:57.776404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.084 [2024-11-20 16:19:57.776838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.084 [2024-11-20 16:19:57.776854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.084 [2024-11-20 16:19:57.776861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.084 [2024-11-20 16:19:57.777048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.084 [2024-11-20 16:19:57.777222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.084 [2024-11-20 16:19:57.777231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.084 [2024-11-20 16:19:57.777237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.084 [2024-11-20 16:19:57.777243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.084 [2024-11-20 16:19:57.789244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.084 [2024-11-20 16:19:57.789693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.084 [2024-11-20 16:19:57.789709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.084 [2024-11-20 16:19:57.789716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.084 [2024-11-20 16:19:57.789888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.084 [2024-11-20 16:19:57.790087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.084 [2024-11-20 16:19:57.790096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.084 [2024-11-20 16:19:57.790103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.084 [2024-11-20 16:19:57.790109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.084 [2024-11-20 16:19:57.802146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.802571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.802587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.802595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.802790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.802959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.802968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.802974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.802997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.815038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.815458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.815475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.815481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.815644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.815807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.815815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.815820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.815826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.827981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.828328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.828345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.828352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.828514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.828677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.828685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.828691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.828697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.840817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.841245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.841289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.841312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.841897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.842373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.842382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.842388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.842394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.853681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.854108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.854155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.854178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.854758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.855329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.855337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.855343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.855349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.866495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.866930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.866952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.866960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.867151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.867330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.867338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.867345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.867351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.879654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.880089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.880107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.880114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.880292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.880471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.880485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.084 [2024-11-20 16:19:57.880493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.084 [2024-11-20 16:19:57.880500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.084 [2024-11-20 16:19:57.892541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.084 [2024-11-20 16:19:57.892959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.084 [2024-11-20 16:19:57.892975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.084 [2024-11-20 16:19:57.892982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.084 [2024-11-20 16:19:57.893145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.084 [2024-11-20 16:19:57.893309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.084 [2024-11-20 16:19:57.893317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.085 [2024-11-20 16:19:57.893323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.085 [2024-11-20 16:19:57.893329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.085 [2024-11-20 16:19:57.905387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.085 [2024-11-20 16:19:57.905824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.085 [2024-11-20 16:19:57.905869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.085 [2024-11-20 16:19:57.905892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.085 [2024-11-20 16:19:57.906376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.085 [2024-11-20 16:19:57.906550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.085 [2024-11-20 16:19:57.906558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.085 [2024-11-20 16:19:57.906564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.085 [2024-11-20 16:19:57.906571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.347 [2024-11-20 16:19:57.918459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.347 [2024-11-20 16:19:57.918913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.347 [2024-11-20 16:19:57.918931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.347 [2024-11-20 16:19:57.918938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.347 [2024-11-20 16:19:57.919115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.347 [2024-11-20 16:19:57.919289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.347 [2024-11-20 16:19:57.919298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.347 [2024-11-20 16:19:57.919304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.347 [2024-11-20 16:19:57.919314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.347 [2024-11-20 16:19:57.931320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.347 [2024-11-20 16:19:57.931752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.347 [2024-11-20 16:19:57.931799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.347 [2024-11-20 16:19:57.931821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.347 [2024-11-20 16:19:57.932366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.347 [2024-11-20 16:19:57.932756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.347 [2024-11-20 16:19:57.932773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.347 [2024-11-20 16:19:57.932787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.347 [2024-11-20 16:19:57.932800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:57.946154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:57.946681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:57.946704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:57.946714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:57.946976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:57.947231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:57.947243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:57.947252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:57.947260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:57.959241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:57.959676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:57.959716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:57.959741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:57.960290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:57.960464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:57.960472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:57.960479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:57.960485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:57.972103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:57.972564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:57.972608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:57.972631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:57.973227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:57.973453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:57.973461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:57.973467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:57.973474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:57.985003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:57.985391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:57.985408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:57.985415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:57.985578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:57.985740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:57.985748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:57.985754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:57.985759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:57.997878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:57.998331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:57.998376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:57.998398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:57.998855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:57.999076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:57.999085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:57.999091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:57.999098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:58.010834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:58.011266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:58.011283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:58.011290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:58.011467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:58.011640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:58.011648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:58.011655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:58.011661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:58.023658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:58.024071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:58.024088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:58.024095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:58.024276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:58.024439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:58.024447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:58.024453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:58.024459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:58.036578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:58.036987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:58.037005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:58.037013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:58.037185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:58.037360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:58.037369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:58.037375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:58.037381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:58.049417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:58.049826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:58.049872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:58.049896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:58.050490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:58.050736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:58.050747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:58.050753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.348 [2024-11-20 16:19:58.050759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.348 [2024-11-20 16:19:58.062288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.348 [2024-11-20 16:19:58.062706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.348 [2024-11-20 16:19:58.062723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.348 [2024-11-20 16:19:58.062730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.348 [2024-11-20 16:19:58.062903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.348 [2024-11-20 16:19:58.063102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.348 [2024-11-20 16:19:58.063110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.348 [2024-11-20 16:19:58.063117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.063123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.075154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.075548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.075564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.075572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.075744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.075916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.075924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.075931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.075937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.088059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.088487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.088504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.088511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.088683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.088856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.088864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.088871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.088880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.100935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.101362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.101378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.101385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.101548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.101710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.101718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.101724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.101730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.113775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.114196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.114213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.114220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.114392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.114570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.114578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.114584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.114590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.126630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.127053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.127071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.127078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.127256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.127434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.127443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.127450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.127456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.139798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.140216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.140233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.140241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.140419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.140597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.140607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.140614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.140621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.152767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.349 [2024-11-20 16:19:58.153165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.349 [2024-11-20 16:19:58.153211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.349 [2024-11-20 16:19:58.153234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.349 [2024-11-20 16:19:58.153787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.349 [2024-11-20 16:19:58.153965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.349 [2024-11-20 16:19:58.153973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.349 [2024-11-20 16:19:58.153980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.349 [2024-11-20 16:19:58.153986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.349 [2024-11-20 16:19:58.165832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.349 [2024-11-20 16:19:58.166230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.349 [2024-11-20 16:19:58.166247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.349 [2024-11-20 16:19:58.166254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.349 [2024-11-20 16:19:58.166426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.349 [2024-11-20 16:19:58.166598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.349 [2024-11-20 16:19:58.166607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.349 [2024-11-20 16:19:58.166613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.349 [2024-11-20 16:19:58.166619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.349 [2024-11-20 16:19:58.178819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.349 [2024-11-20 16:19:58.179250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.349 [2024-11-20 16:19:58.179267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.349 [2024-11-20 16:19:58.179275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.610 [2024-11-20 16:19:58.179455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.610 [2024-11-20 16:19:58.179632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.610 [2024-11-20 16:19:58.179641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.610 [2024-11-20 16:19:58.179647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.610 [2024-11-20 16:19:58.179653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.610 [2024-11-20 16:19:58.191757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.610 [2024-11-20 16:19:58.192147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-11-20 16:19:58.192165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.610 [2024-11-20 16:19:58.192172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.610 [2024-11-20 16:19:58.192344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.610 [2024-11-20 16:19:58.192517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.610 [2024-11-20 16:19:58.192526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.610 [2024-11-20 16:19:58.192532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.610 [2024-11-20 16:19:58.192538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.610 [2024-11-20 16:19:58.204747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.610 [2024-11-20 16:19:58.205171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-11-20 16:19:58.205189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.610 [2024-11-20 16:19:58.205196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.610 [2024-11-20 16:19:58.205368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.610 [2024-11-20 16:19:58.205540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.610 [2024-11-20 16:19:58.205549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.610 [2024-11-20 16:19:58.205555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.610 [2024-11-20 16:19:58.205561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.610 [2024-11-20 16:19:58.217563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.610 [2024-11-20 16:19:58.217979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-11-20 16:19:58.217997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.610 [2024-11-20 16:19:58.218004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.610 [2024-11-20 16:19:58.218176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.610 [2024-11-20 16:19:58.218348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.610 [2024-11-20 16:19:58.218360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.610 [2024-11-20 16:19:58.218366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.610 [2024-11-20 16:19:58.218372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.610 [2024-11-20 16:19:58.230439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.610 [2024-11-20 16:19:58.230759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-11-20 16:19:58.230775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.610 [2024-11-20 16:19:58.230782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.610 [2024-11-20 16:19:58.230945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.610 [2024-11-20 16:19:58.231139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.610 [2024-11-20 16:19:58.231147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.610 [2024-11-20 16:19:58.231153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.610 [2024-11-20 16:19:58.231159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.610 [2024-11-20 16:19:58.243228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.610 [2024-11-20 16:19:58.243624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-11-20 16:19:58.243641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.610 [2024-11-20 16:19:58.243649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.610 [2024-11-20 16:19:58.243821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.610 [2024-11-20 16:19:58.244001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.610 [2024-11-20 16:19:58.244010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.610 [2024-11-20 16:19:58.244016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.244022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.256053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.256446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.256462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.256469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.256632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.256795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.256803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.256809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.256818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.268982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.269400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.269417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.269424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.269596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.269768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.269777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.269783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.269789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.281824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.282244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.282268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.282440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.282612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.282620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.282627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.282633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.294845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.295274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.295291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.295298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.295471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.295648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.295657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.295663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.295669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.307859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.308244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.308261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.308268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.308440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.308613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.308621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.308627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.308633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.320887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.321322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.321339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.321346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.321519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.321692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.321700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.321707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.321713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.333901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.334259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.334277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.334284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.334455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.334627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.334636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.334642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.334648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.346875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.347326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.347371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.347394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.347998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.348490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.348500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.348507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.348515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.359988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.360348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.360365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.360372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.360549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.360727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.360736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.611 [2024-11-20 16:19:58.360742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.611 [2024-11-20 16:19:58.360748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.611 [2024-11-20 16:19:58.372978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.611 [2024-11-20 16:19:58.373265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-11-20 16:19:58.373282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.611 [2024-11-20 16:19:58.373289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.611 [2024-11-20 16:19:58.373462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.611 [2024-11-20 16:19:58.373635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.611 [2024-11-20 16:19:58.373644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.612 [2024-11-20 16:19:58.373650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.612 [2024-11-20 16:19:58.373656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.612 [2024-11-20 16:19:58.385985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.612 [2024-11-20 16:19:58.386373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.612 [2024-11-20 16:19:58.386390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.612 [2024-11-20 16:19:58.386398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.612 [2024-11-20 16:19:58.386576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.612 [2024-11-20 16:19:58.386753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.612 [2024-11-20 16:19:58.386765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.612 [2024-11-20 16:19:58.386772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.612 [2024-11-20 16:19:58.386778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.612 [2024-11-20 16:19:58.399134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.612 [2024-11-20 16:19:58.399489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.612 [2024-11-20 16:19:58.399506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.612 [2024-11-20 16:19:58.399513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.612 [2024-11-20 16:19:58.399691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.612 [2024-11-20 16:19:58.399870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.612 [2024-11-20 16:19:58.399879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.612 [2024-11-20 16:19:58.399886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.612 [2024-11-20 16:19:58.399892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.612 [2024-11-20 16:19:58.412120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.612 [2024-11-20 16:19:58.412518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.612 [2024-11-20 16:19:58.412535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.612 [2024-11-20 16:19:58.412543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.612 [2024-11-20 16:19:58.412715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.612 [2024-11-20 16:19:58.412888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.612 [2024-11-20 16:19:58.412896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.612 [2024-11-20 16:19:58.412902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.612 [2024-11-20 16:19:58.412909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.612 [2024-11-20 16:19:58.425111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.612 [2024-11-20 16:19:58.425525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.612 [2024-11-20 16:19:58.425569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.612 [2024-11-20 16:19:58.425592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.612 [2024-11-20 16:19:58.426186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.612 [2024-11-20 16:19:58.426695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.612 [2024-11-20 16:19:58.426704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.612 [2024-11-20 16:19:58.426710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.612 [2024-11-20 16:19:58.426719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.612 [2024-11-20 16:19:58.438054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:57.612 [2024-11-20 16:19:58.438461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.612 [2024-11-20 16:19:58.438477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:57.612 [2024-11-20 16:19:58.438485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:57.612 [2024-11-20 16:19:58.438675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:57.612 [2024-11-20 16:19:58.438854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:57.612 [2024-11-20 16:19:58.438863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:57.612 [2024-11-20 16:19:58.438869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:57.612 [2024-11-20 16:19:58.438875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:57.872 [2024-11-20 16:19:58.451151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.872 [2024-11-20 16:19:58.451556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.872 [2024-11-20 16:19:58.451599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.872 [2024-11-20 16:19:58.451622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.872 [2024-11-20 16:19:58.452216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.872 [2024-11-20 16:19:58.452699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.872 [2024-11-20 16:19:58.452707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.872 [2024-11-20 16:19:58.452713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.872 [2024-11-20 16:19:58.452719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.872 [2024-11-20 16:19:58.464068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.872 [2024-11-20 16:19:58.464414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.872 [2024-11-20 16:19:58.464431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.872 [2024-11-20 16:19:58.464438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.872 [2024-11-20 16:19:58.464610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.872 [2024-11-20 16:19:58.464781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.872 [2024-11-20 16:19:58.464789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.872 [2024-11-20 16:19:58.464795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.872 [2024-11-20 16:19:58.464801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.872 [2024-11-20 16:19:58.477056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.872 [2024-11-20 16:19:58.477391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.872 [2024-11-20 16:19:58.477407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.872 [2024-11-20 16:19:58.477415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.872 [2024-11-20 16:19:58.477587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.872 [2024-11-20 16:19:58.477760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.872 [2024-11-20 16:19:58.477770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.872 [2024-11-20 16:19:58.477776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.872 [2024-11-20 16:19:58.477782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.872 [2024-11-20 16:19:58.489988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.872 [2024-11-20 16:19:58.490326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.872 [2024-11-20 16:19:58.490343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.872 [2024-11-20 16:19:58.490351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.872 [2024-11-20 16:19:58.490522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.872 [2024-11-20 16:19:58.490696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.872 [2024-11-20 16:19:58.490705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.872 [2024-11-20 16:19:58.490711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.872 [2024-11-20 16:19:58.490717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.872 [2024-11-20 16:19:58.503017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.872 [2024-11-20 16:19:58.503321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.872 [2024-11-20 16:19:58.503339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.503346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.503524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.503703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.503711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.503718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.503724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.516035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.516338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.516382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.516405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.516959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.517148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.517157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.517164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.517170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.528924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.529319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.529337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.529344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.529516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.529688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.529697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.529703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.529709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.541802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.542196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.542225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.542233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.542405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.542578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.542586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.542593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.542599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.554817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.555171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.555188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.555195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.555366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.555542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.555554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.555560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.555566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.567802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.568187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.568232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.568255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.568834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.569254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.569273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.569287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.569300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.582982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.583359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.583381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.583391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.583645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.583901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.583913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.583922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.583931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.596020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.596309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.596326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.596333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.596506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.596680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.596688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.596695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.596705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.608908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.609212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.609229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.609237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.609409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.609582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.609591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.609597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.609603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.621794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.622085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.622102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.873 [2024-11-20 16:19:58.622109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.873 [2024-11-20 16:19:58.622280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.873 [2024-11-20 16:19:58.622453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.873 [2024-11-20 16:19:58.622461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.873 [2024-11-20 16:19:58.622467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.873 [2024-11-20 16:19:58.622474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.873 [2024-11-20 16:19:58.634723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.873 [2024-11-20 16:19:58.635119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.873 [2024-11-20 16:19:58.635175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.874 [2024-11-20 16:19:58.635198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.874 [2024-11-20 16:19:58.635777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.874 [2024-11-20 16:19:58.636353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.874 [2024-11-20 16:19:58.636362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.874 [2024-11-20 16:19:58.636368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.874 [2024-11-20 16:19:58.636375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.874 [2024-11-20 16:19:58.647816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.874 [2024-11-20 16:19:58.648194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.874 [2024-11-20 16:19:58.648211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.874 [2024-11-20 16:19:58.648219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.874 [2024-11-20 16:19:58.648397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.874 [2024-11-20 16:19:58.648576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.874 [2024-11-20 16:19:58.648585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.874 [2024-11-20 16:19:58.648592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.874 [2024-11-20 16:19:58.648598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.874 [2024-11-20 16:19:58.660864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.874 [2024-11-20 16:19:58.661228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.874 [2024-11-20 16:19:58.661245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.874 [2024-11-20 16:19:58.661252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.874 [2024-11-20 16:19:58.661425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.874 [2024-11-20 16:19:58.661599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.874 [2024-11-20 16:19:58.661608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.874 [2024-11-20 16:19:58.661614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.874 [2024-11-20 16:19:58.661620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.874 7400.50 IOPS, 28.91 MiB/s [2024-11-20T15:19:58.711Z] [2024-11-20 16:19:58.675068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.874 [2024-11-20 16:19:58.675412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.874 [2024-11-20 16:19:58.675429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.874 [2024-11-20 16:19:58.675437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.874 [2024-11-20 16:19:58.675615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.874 [2024-11-20 16:19:58.675793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.874 [2024-11-20 16:19:58.675801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.874 [2024-11-20 16:19:58.675808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.874 [2024-11-20 16:19:58.675814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.874 [2024-11-20 16:19:58.687976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.874 [2024-11-20 16:19:58.688397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.874 [2024-11-20 16:19:58.688414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.874 [2024-11-20 16:19:58.688421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.874 [2024-11-20 16:19:58.688596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.874 [2024-11-20 16:19:58.688772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.874 [2024-11-20 16:19:58.688781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.874 [2024-11-20 16:19:58.688787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.874 [2024-11-20 16:19:58.688793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:57.874 [2024-11-20 16:19:58.700931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:57.874 [2024-11-20 16:19:58.701313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.874 [2024-11-20 16:19:58.701330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:57.874 [2024-11-20 16:19:58.701337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:57.874 [2024-11-20 16:19:58.701514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:57.874 [2024-11-20 16:19:58.701692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:57.874 [2024-11-20 16:19:58.701701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:57.874 [2024-11-20 16:19:58.701707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:57.874 [2024-11-20 16:19:58.701713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.134 [2024-11-20 16:19:58.713877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.134 [2024-11-20 16:19:58.714185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.134 [2024-11-20 16:19:58.714202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.134 [2024-11-20 16:19:58.714209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.134 [2024-11-20 16:19:58.714381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.134 [2024-11-20 16:19:58.714553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.134 [2024-11-20 16:19:58.714562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.134 [2024-11-20 16:19:58.714568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.134 [2024-11-20 16:19:58.714574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.134 [2024-11-20 16:19:58.726775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.134 [2024-11-20 16:19:58.727137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.134 [2024-11-20 16:19:58.727154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.134 [2024-11-20 16:19:58.727161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.134 [2024-11-20 16:19:58.727334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.134 [2024-11-20 16:19:58.727507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.134 [2024-11-20 16:19:58.727519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.134 [2024-11-20 16:19:58.727525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.134 [2024-11-20 16:19:58.727531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.134 [2024-11-20 16:19:58.739830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.134 [2024-11-20 16:19:58.740285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.134 [2024-11-20 16:19:58.740331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.134 [2024-11-20 16:19:58.740354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.134 [2024-11-20 16:19:58.740932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.134 [2024-11-20 16:19:58.741531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.134 [2024-11-20 16:19:58.741556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.134 [2024-11-20 16:19:58.741563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.134 [2024-11-20 16:19:58.741569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.134 [2024-11-20 16:19:58.752756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.134 [2024-11-20 16:19:58.753194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.134 [2024-11-20 16:19:58.753211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.134 [2024-11-20 16:19:58.753218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.134 [2024-11-20 16:19:58.753390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.134 [2024-11-20 16:19:58.753564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.134 [2024-11-20 16:19:58.753573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.134 [2024-11-20 16:19:58.753579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.134 [2024-11-20 16:19:58.753585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.135 [2024-11-20 16:19:58.765597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.766014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.766031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.766039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.766211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.766385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.766393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.766399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.766409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.778519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.778913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.778929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.778936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.779128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.779301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.779309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.779315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.779321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.791447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.791866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.791882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.791890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.792067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.792240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.792248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.792254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.792260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.804395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.804816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.804833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.804840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.805018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.805191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.805200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.805206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.805212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.817260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.817694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.817735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.817759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.818353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.818595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.818603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.818609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.818615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.830088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.830494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.830538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.830562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.831154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.831740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.831756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.831770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.831784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.845275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.845780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.845825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.845848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.846441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.846965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.846977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.846986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.846996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.858287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.858662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.858679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.858686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.858856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.859052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.859061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.135 [2024-11-20 16:19:58.859067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.135 [2024-11-20 16:19:58.859073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.135 [2024-11-20 16:19:58.871336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.135 [2024-11-20 16:19:58.871736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.135 [2024-11-20 16:19:58.871753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.135 [2024-11-20 16:19:58.871760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.135 [2024-11-20 16:19:58.871937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.135 [2024-11-20 16:19:58.872120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.135 [2024-11-20 16:19:58.872130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.872136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.872142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.884340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.884734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.884774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.884798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.885392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.885926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.885934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.885940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.885949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.897176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.897598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.897615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.897622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.897795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.897972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.898001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.898009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.898016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.910383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.910841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.910885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.910908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.911504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.912091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.912100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.912106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.912112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.923460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.923875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.923893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.923900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.924100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.924279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.924288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.924295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.924301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.936435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.936853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.936869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.936876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.937055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.937228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.937237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.937243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.937252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.949286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.949703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.949719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.949726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.949898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.950077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.950086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.950092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.950098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.136 [2024-11-20 16:19:58.962092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.136 [2024-11-20 16:19:58.962489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.136 [2024-11-20 16:19:58.962506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.136 [2024-11-20 16:19:58.962513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.136 [2024-11-20 16:19:58.962685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.136 [2024-11-20 16:19:58.962858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.136 [2024-11-20 16:19:58.962866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.136 [2024-11-20 16:19:58.962872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.136 [2024-11-20 16:19:58.962878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.397 [2024-11-20 16:19:58.975122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.397 [2024-11-20 16:19:58.975537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.397 [2024-11-20 16:19:58.975582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.397 [2024-11-20 16:19:58.975605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.397 [2024-11-20 16:19:58.976093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.397 [2024-11-20 16:19:58.976267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.397 [2024-11-20 16:19:58.976276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.397 [2024-11-20 16:19:58.976282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.397 [2024-11-20 16:19:58.976289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.397 [2024-11-20 16:19:58.987914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.397 [2024-11-20 16:19:58.988311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.397 [2024-11-20 16:19:58.988327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.397 [2024-11-20 16:19:58.988334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.397 [2024-11-20 16:19:58.988497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.397 [2024-11-20 16:19:58.988660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.397 [2024-11-20 16:19:58.988668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.397 [2024-11-20 16:19:58.988674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.397 [2024-11-20 16:19:58.988680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.397 [2024-11-20 16:19:59.000715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.397 [2024-11-20 16:19:59.001125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.397 [2024-11-20 16:19:59.001172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.397 [2024-11-20 16:19:59.001195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.397 [2024-11-20 16:19:59.001776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.397 [2024-11-20 16:19:59.001968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.397 [2024-11-20 16:19:59.001977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.397 [2024-11-20 16:19:59.001983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.397 [2024-11-20 16:19:59.001989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.397 [2024-11-20 16:19:59.013653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.397 [2024-11-20 16:19:59.014083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.397 [2024-11-20 16:19:59.014101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.397 [2024-11-20 16:19:59.014108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.397 [2024-11-20 16:19:59.014287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.397 [2024-11-20 16:19:59.014465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.397 [2024-11-20 16:19:59.014475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.397 [2024-11-20 16:19:59.014482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.397 [2024-11-20 16:19:59.014488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.397 [2024-11-20 16:19:59.026622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.397 [2024-11-20 16:19:59.027079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.397 [2024-11-20 16:19:59.027125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.397 [2024-11-20 16:19:59.027149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.397 [2024-11-20 16:19:59.027737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.397 [2024-11-20 16:19:59.028007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.028017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.028024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.028030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.039563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.039977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.039994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.040001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.040174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.040346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.040356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.040363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.040369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.052406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.052854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.052899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.052923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.053348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.053523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.053531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.053537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.053543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.065292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.065749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.065794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.065817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.066318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.066492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.066504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.066510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.066516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.078220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.078644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.078661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.078668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.078840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.079017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.079026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.079033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.079039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.091108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.091518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.091534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.091541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.091704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.091867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.091875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.091881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.091887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.104009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.104454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.104499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.104521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.105115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.105398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.105407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.105413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.105422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.119035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.119547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.119591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.119615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.120211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.120662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.120673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.120682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.120691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.131956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.132417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.132461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.132485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.133078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.133591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.133599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.133606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.133612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.144792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.145155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.145172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.145179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.145351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.398 [2024-11-20 16:19:59.145524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.398 [2024-11-20 16:19:59.145533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.398 [2024-11-20 16:19:59.145539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.398 [2024-11-20 16:19:59.145545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.398 [2024-11-20 16:19:59.157609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.398 [2024-11-20 16:19:59.158028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.398 [2024-11-20 16:19:59.158045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.398 [2024-11-20 16:19:59.158053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.398 [2024-11-20 16:19:59.158226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.399 [2024-11-20 16:19:59.158399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.399 [2024-11-20 16:19:59.158408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.399 [2024-11-20 16:19:59.158415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.399 [2024-11-20 16:19:59.158422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.399 [2024-11-20 16:19:59.170782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.399 [2024-11-20 16:19:59.171222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.399 [2024-11-20 16:19:59.171239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.399 [2024-11-20 16:19:59.171247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.399 [2024-11-20 16:19:59.171424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.399 [2024-11-20 16:19:59.171602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.399 [2024-11-20 16:19:59.171611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.399 [2024-11-20 16:19:59.171617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.399 [2024-11-20 16:19:59.171623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.399 [2024-11-20 16:19:59.183821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.399 [2024-11-20 16:19:59.184283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.399 [2024-11-20 16:19:59.184328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.399 [2024-11-20 16:19:59.184352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.399 [2024-11-20 16:19:59.184932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.399 [2024-11-20 16:19:59.185529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.399 [2024-11-20 16:19:59.185555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.399 [2024-11-20 16:19:59.185576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.399 [2024-11-20 16:19:59.185595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.399 [2024-11-20 16:19:59.196635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.399 [2024-11-20 16:19:59.197052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.399 [2024-11-20 16:19:59.197069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.399 [2024-11-20 16:19:59.197076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.399 [2024-11-20 16:19:59.197252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.399 [2024-11-20 16:19:59.197425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.399 [2024-11-20 16:19:59.197434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.399 [2024-11-20 16:19:59.197440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.399 [2024-11-20 16:19:59.197446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.399 [2024-11-20 16:19:59.209554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.399 [2024-11-20 16:19:59.209956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.399 [2024-11-20 16:19:59.209972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.399 [2024-11-20 16:19:59.209995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.399 [2024-11-20 16:19:59.210168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.399 [2024-11-20 16:19:59.210342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.399 [2024-11-20 16:19:59.210350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.399 [2024-11-20 16:19:59.210356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.399 [2024-11-20 16:19:59.210363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.399 [2024-11-20 16:19:59.222478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.399 [2024-11-20 16:19:59.222957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.399 [2024-11-20 16:19:59.223003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.399 [2024-11-20 16:19:59.223026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.399 [2024-11-20 16:19:59.223472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.399 [2024-11-20 16:19:59.223646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.399 [2024-11-20 16:19:59.223654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.399 [2024-11-20 16:19:59.223660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.399 [2024-11-20 16:19:59.223666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.235442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.235783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.235800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.235807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.235994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.236174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.236185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.236192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.236199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.248440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.248829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.248875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.248898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.249492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.249903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.249912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.249918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.249924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.261255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.261680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.261695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.261702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.261865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.262053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.262062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.262068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.262075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.274087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.274485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.274531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.274555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.275077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.275251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.275259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.275265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.275274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.286882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.287320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.287337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.287344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.287516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.287689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.287697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.287703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.287710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.299725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.300118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.300135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.300142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.300305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.300468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.300476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.300482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.300488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.312530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.312957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.312974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.312981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.313143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.313305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.313313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.313319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.313324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.325350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.325771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.325786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.325793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.325961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.326169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.660 [2024-11-20 16:19:59.326183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.660 [2024-11-20 16:19:59.326190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.660 [2024-11-20 16:19:59.326196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.660 [2024-11-20 16:19:59.338265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.660 [2024-11-20 16:19:59.338679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.660 [2024-11-20 16:19:59.338696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.660 [2024-11-20 16:19:59.338703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.660 [2024-11-20 16:19:59.338875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.660 [2024-11-20 16:19:59.339072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.661 [2024-11-20 16:19:59.339081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.661 [2024-11-20 16:19:59.339087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.661 [2024-11-20 16:19:59.339094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.661 [2024-11-20 16:19:59.351104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.661 [2024-11-20 16:19:59.351449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.661 [2024-11-20 16:19:59.351493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.661 [2024-11-20 16:19:59.351516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.661 [2024-11-20 16:19:59.352023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.661 [2024-11-20 16:19:59.352196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.661 [2024-11-20 16:19:59.352205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.661 [2024-11-20 16:19:59.352211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.661 [2024-11-20 16:19:59.352217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.661 [2024-11-20 16:19:59.364020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.364462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.364477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.364484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.364650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.364813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.364821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.364827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.364833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.376889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.377321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.377338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.377345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.377517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.377691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.377699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.377706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.377712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.389821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.390191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.390208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.390215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.390387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.390561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.390569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.390575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.390582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.402714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.403166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.403183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.403191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.403354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.403517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.403528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.403534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.403540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.415609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.416052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.416070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.416077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.416254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.416432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.416441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.416448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.416455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.428720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.429153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.429171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.429179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.429356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.429536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.429545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.429552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.429559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.441791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.442169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.442185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.442193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.442365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.442540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.442548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.442554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.442564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.454633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.454965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.455010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.661 [2024-11-20 16:19:59.455034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.661 [2024-11-20 16:19:59.455614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.661 [2024-11-20 16:19:59.456213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.661 [2024-11-20 16:19:59.456236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.661 [2024-11-20 16:19:59.456243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.661 [2024-11-20 16:19:59.456250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.661 [2024-11-20 16:19:59.467570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.661 [2024-11-20 16:19:59.467919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.661 [2024-11-20 16:19:59.467935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.662 [2024-11-20 16:19:59.467942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.662 [2024-11-20 16:19:59.468134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.662 [2024-11-20 16:19:59.468308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.662 [2024-11-20 16:19:59.468317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.662 [2024-11-20 16:19:59.468323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.662 [2024-11-20 16:19:59.468329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.662 [2024-11-20 16:19:59.480359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.662 [2024-11-20 16:19:59.480785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.662 [2024-11-20 16:19:59.480830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.662 [2024-11-20 16:19:59.480852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.662 [2024-11-20 16:19:59.481329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.662 [2024-11-20 16:19:59.481502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.662 [2024-11-20 16:19:59.481511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.662 [2024-11-20 16:19:59.481517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.662 [2024-11-20 16:19:59.481524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.922 [2024-11-20 16:19:59.493425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.922 [2024-11-20 16:19:59.493882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.922 [2024-11-20 16:19:59.493900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.922 [2024-11-20 16:19:59.493907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.922 [2024-11-20 16:19:59.494114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.922 [2024-11-20 16:19:59.494293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.922 [2024-11-20 16:19:59.494302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.922 [2024-11-20 16:19:59.494308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.922 [2024-11-20 16:19:59.494315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.922 [2024-11-20 16:19:59.506521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.922 [2024-11-20 16:19:59.506944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.922 [2024-11-20 16:19:59.506965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.922 [2024-11-20 16:19:59.506973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.922 [2024-11-20 16:19:59.507145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.922 [2024-11-20 16:19:59.507319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.922 [2024-11-20 16:19:59.507327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.922 [2024-11-20 16:19:59.507334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.922 [2024-11-20 16:19:59.507340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.922 [2024-11-20 16:19:59.519373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.922 [2024-11-20 16:19:59.519801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.922 [2024-11-20 16:19:59.519817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.922 [2024-11-20 16:19:59.519824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.922 [2024-11-20 16:19:59.520010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.922 [2024-11-20 16:19:59.520188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.922 [2024-11-20 16:19:59.520196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.922 [2024-11-20 16:19:59.520202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.922 [2024-11-20 16:19:59.520209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.922 [2024-11-20 16:19:59.532270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.922 [2024-11-20 16:19:59.532583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.922 [2024-11-20 16:19:59.532599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.922 [2024-11-20 16:19:59.532606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.922 [2024-11-20 16:19:59.532772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.922 [2024-11-20 16:19:59.532939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.922 [2024-11-20 16:19:59.532952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.922 [2024-11-20 16:19:59.532959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.922 [2024-11-20 16:19:59.532965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.922 [2024-11-20 16:19:59.545099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.922 [2024-11-20 16:19:59.545539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.922 [2024-11-20 16:19:59.545584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.922 [2024-11-20 16:19:59.545607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.922 [2024-11-20 16:19:59.546203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.922 [2024-11-20 16:19:59.546394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.922 [2024-11-20 16:19:59.546402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.922 [2024-11-20 16:19:59.546409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.922 [2024-11-20 16:19:59.546415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.922 [2024-11-20 16:19:59.558068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.922 [2024-11-20 16:19:59.558440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.558486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.558510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.559102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.559551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.559560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.559566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.559572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.571082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.571497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.571541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.571564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.572161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.572646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.572658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.572665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.572671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.584037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.584411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.584429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.584436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.584609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.584782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.584790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.584797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.584803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.597035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.597327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.597343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.597350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.597523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.597696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.597705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.597711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.597717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.610050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.610446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.610463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.610470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.610641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.610815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.610823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.610829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.610839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.623001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.623352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.623397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.623420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.623884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.624063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.624071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.624078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.624084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.635966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.636382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.636420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.636444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.637036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.637616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.637624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.637630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.637636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.648927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.649278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.649295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.649302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.649474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.649646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.649654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.649661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.649667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 [2024-11-20 16:19:59.661884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.923 [2024-11-20 16:19:59.662339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.923 [2024-11-20 16:19:59.662356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.923 [2024-11-20 16:19:59.662363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.923 [2024-11-20 16:19:59.662536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.923 [2024-11-20 16:19:59.662707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.923 [2024-11-20 16:19:59.662716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.923 [2024-11-20 16:19:59.662722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.923 [2024-11-20 16:19:59.662728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.923 5920.40 IOPS, 23.13 MiB/s [2024-11-20T15:19:59.760Z] [2024-11-20 16:19:59.676084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.924 [2024-11-20 16:19:59.676509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.924 [2024-11-20 16:19:59.676526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.924 [2024-11-20 16:19:59.676533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.924 [2024-11-20 16:19:59.676711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.924 [2024-11-20 16:19:59.676889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.924 [2024-11-20 16:19:59.676898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.924 [2024-11-20 16:19:59.676905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.924 [2024-11-20 16:19:59.676911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.924 [2024-11-20 16:19:59.689237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.924 [2024-11-20 16:19:59.689653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.924 [2024-11-20 16:19:59.689669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.924 [2024-11-20 16:19:59.689677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.924 [2024-11-20 16:19:59.689855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.924 [2024-11-20 16:19:59.690038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.924 [2024-11-20 16:19:59.690048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.924 [2024-11-20 16:19:59.690055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.924 [2024-11-20 16:19:59.690061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.924 [2024-11-20 16:19:59.702037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.924 [2024-11-20 16:19:59.702456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.924 [2024-11-20 16:19:59.702473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.924 [2024-11-20 16:19:59.702484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.924 [2024-11-20 16:19:59.702657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.924 [2024-11-20 16:19:59.702830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.924 [2024-11-20 16:19:59.702838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.924 [2024-11-20 16:19:59.702844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.924 [2024-11-20 16:19:59.702850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.924 [2024-11-20 16:19:59.714916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:58.924 [2024-11-20 16:19:59.715352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.924 [2024-11-20 16:19:59.715397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:58.924 [2024-11-20 16:19:59.715420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:58.924 [2024-11-20 16:19:59.715845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:58.924 [2024-11-20 16:19:59.716032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:58.924 [2024-11-20 16:19:59.716040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:58.924 [2024-11-20 16:19:59.716046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:58.924 [2024-11-20 16:19:59.716053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:58.924 [2024-11-20 16:19:59.728109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.924 [2024-11-20 16:19:59.728561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.924 [2024-11-20 16:19:59.728607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.924 [2024-11-20 16:19:59.728631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.924 [2024-11-20 16:19:59.729228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.924 [2024-11-20 16:19:59.729816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.924 [2024-11-20 16:19:59.729841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.924 [2024-11-20 16:19:59.729861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.924 [2024-11-20 16:19:59.729890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.924 [2024-11-20 16:19:59.741195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.924 [2024-11-20 16:19:59.741620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.924 [2024-11-20 16:19:59.741638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.924 [2024-11-20 16:19:59.741645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.924 [2024-11-20 16:19:59.741817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.924 [2024-11-20 16:19:59.742010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.924 [2024-11-20 16:19:59.742023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.924 [2024-11-20 16:19:59.742030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.924 [2024-11-20 16:19:59.742036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:58.924 [2024-11-20 16:19:59.754296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:58.924 [2024-11-20 16:19:59.754676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.924 [2024-11-20 16:19:59.754692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:58.924 [2024-11-20 16:19:59.754699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:58.924 [2024-11-20 16:19:59.754874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:58.924 [2024-11-20 16:19:59.755075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:58.924 [2024-11-20 16:19:59.755085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:58.924 [2024-11-20 16:19:59.755091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:58.924 [2024-11-20 16:19:59.755098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.184 [2024-11-20 16:19:59.767368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.184 [2024-11-20 16:19:59.767741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-11-20 16:19:59.767758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.184 [2024-11-20 16:19:59.767765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.184 [2024-11-20 16:19:59.767937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.184 [2024-11-20 16:19:59.768115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.184 [2024-11-20 16:19:59.768124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.184 [2024-11-20 16:19:59.768131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.768137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.780284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.780584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.780601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.780608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.780781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.780958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.780967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.780973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.780985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.793425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.793767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.793784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.793791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.793968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.794141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.794149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.794155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.794161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.806525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.806844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.806888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.806911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.807511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.808042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.808052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.808058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.808065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.819483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.819858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.819900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.819923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.820517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.821117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.821126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.821132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.821139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.832395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.832822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.832866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.832889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.833484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.834076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.834103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.834124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.834145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.847413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.847867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.847889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.847900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.848159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.848415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.848427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.848436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.848445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.860524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.860855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.860872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.860879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.861056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.861229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.861237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.861243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.861249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.873498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.873821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.873837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.873844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.874025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.874199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.874207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.874213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.874219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.886474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.886812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.886829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.886836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.887013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.185 [2024-11-20 16:19:59.887185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.185 [2024-11-20 16:19:59.887194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.185 [2024-11-20 16:19:59.887200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.185 [2024-11-20 16:19:59.887206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.185 [2024-11-20 16:19:59.899454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.185 [2024-11-20 16:19:59.899831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-11-20 16:19:59.899847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.185 [2024-11-20 16:19:59.899854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.185 [2024-11-20 16:19:59.900032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.900205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.900226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.900232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.900238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.912565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.186 [2024-11-20 16:19:59.913008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-11-20 16:19:59.913057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.186 [2024-11-20 16:19:59.913082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.186 [2024-11-20 16:19:59.913664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.914272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.914284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.914291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.914297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.925481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.186 [2024-11-20 16:19:59.925843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-11-20 16:19:59.925861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.186 [2024-11-20 16:19:59.925868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.186 [2024-11-20 16:19:59.926046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.926220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.926228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.926234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.926240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.938591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.186 [2024-11-20 16:19:59.938937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-11-20 16:19:59.938960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.186 [2024-11-20 16:19:59.938969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.186 [2024-11-20 16:19:59.939148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.939326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.939336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.939343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.939350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.951651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.186 [2024-11-20 16:19:59.951991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-11-20 16:19:59.952008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.186 [2024-11-20 16:19:59.952016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.186 [2024-11-20 16:19:59.952188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.952362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.952370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.952376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.952386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.964628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.186 [2024-11-20 16:19:59.964964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-11-20 16:19:59.964998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.186 [2024-11-20 16:19:59.965005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.186 [2024-11-20 16:19:59.965183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.965362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.965371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.965377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.965384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.977608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.186 [2024-11-20 16:19:59.977935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-11-20 16:19:59.977991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.186 [2024-11-20 16:19:59.978014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.186 [2024-11-20 16:19:59.978480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.186 [2024-11-20 16:19:59.978653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.186 [2024-11-20 16:19:59.978661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.186 [2024-11-20 16:19:59.978667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.186 [2024-11-20 16:19:59.978674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.186 [2024-11-20 16:19:59.990703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.186 [2024-11-20 16:19:59.991057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.186 [2024-11-20 16:19:59.991075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.186 [2024-11-20 16:19:59.991082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.186 [2024-11-20 16:19:59.991261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.186 [2024-11-20 16:19:59.991439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.186 [2024-11-20 16:19:59.991447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.186 [2024-11-20 16:19:59.991455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.186 [2024-11-20 16:19:59.991461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.186 [2024-11-20 16:20:00.004143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.186 [2024-11-20 16:20:00.004564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.186 [2024-11-20 16:20:00.004595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.186 [2024-11-20 16:20:00.004625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.186 [2024-11-20 16:20:00.004991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.186 [2024-11-20 16:20:00.005369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.186 [2024-11-20 16:20:00.005397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.186 [2024-11-20 16:20:00.005407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.186 [2024-11-20 16:20:00.005418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.186 [2024-11-20 16:20:00.017254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.186 [2024-11-20 16:20:00.017606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.186 [2024-11-20 16:20:00.017624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.186 [2024-11-20 16:20:00.017632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.186 [2024-11-20 16:20:00.017811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.186 [2024-11-20 16:20:00.017995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.186 [2024-11-20 16:20:00.018005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.186 [2024-11-20 16:20:00.018012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.186 [2024-11-20 16:20:00.018018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.447 [2024-11-20 16:20:00.030314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.447 [2024-11-20 16:20:00.030709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.447 [2024-11-20 16:20:00.030727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.447 [2024-11-20 16:20:00.030735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.447 [2024-11-20 16:20:00.030913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.447 [2024-11-20 16:20:00.031097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.447 [2024-11-20 16:20:00.031107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.447 [2024-11-20 16:20:00.031114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.447 [2024-11-20 16:20:00.031120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.447 [2024-11-20 16:20:00.043503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.447 [2024-11-20 16:20:00.043796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.447 [2024-11-20 16:20:00.043814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.447 [2024-11-20 16:20:00.043822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.447 [2024-11-20 16:20:00.044011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.447 [2024-11-20 16:20:00.044192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.447 [2024-11-20 16:20:00.044200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.447 [2024-11-20 16:20:00.044207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.447 [2024-11-20 16:20:00.044214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.447 [2024-11-20 16:20:00.057922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.447 [2024-11-20 16:20:00.058223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.447 [2024-11-20 16:20:00.058241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.447 [2024-11-20 16:20:00.058249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.447 [2024-11-20 16:20:00.058427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.447 [2024-11-20 16:20:00.058605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.447 [2024-11-20 16:20:00.058614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.447 [2024-11-20 16:20:00.058620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.447 [2024-11-20 16:20:00.058627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.447 [2024-11-20 16:20:00.070998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.447 [2024-11-20 16:20:00.071284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.447 [2024-11-20 16:20:00.071301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.447 [2024-11-20 16:20:00.071308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.447 [2024-11-20 16:20:00.071485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.447 [2024-11-20 16:20:00.071663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.447 [2024-11-20 16:20:00.071672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.447 [2024-11-20 16:20:00.071679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.447 [2024-11-20 16:20:00.071686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.447 [2024-11-20 16:20:00.084103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.447 [2024-11-20 16:20:00.084393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.447 [2024-11-20 16:20:00.084410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.447 [2024-11-20 16:20:00.084417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.447 [2024-11-20 16:20:00.084595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.447 [2024-11-20 16:20:00.084774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.447 [2024-11-20 16:20:00.084786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.447 [2024-11-20 16:20:00.084793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.447 [2024-11-20 16:20:00.084799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.447 [2024-11-20 16:20:00.097173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.447 [2024-11-20 16:20:00.097616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.447 [2024-11-20 16:20:00.097633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.447 [2024-11-20 16:20:00.097641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.447 [2024-11-20 16:20:00.097818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.447 [2024-11-20 16:20:00.098001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.447 [2024-11-20 16:20:00.098010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.098018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.098024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.110277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.110575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.110592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.110601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.110779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.110963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.110973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.110979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.110986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.123405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.123742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.123759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.123767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.123945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.124129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.124139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.124145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.124155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.136544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.136981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.136999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.137007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.137184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.137362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.137371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.137379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.137385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.149615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.150072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.150090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.150098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.150275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.150454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.150463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.150470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.150478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.162685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.163088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.163106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.163114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.163291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.163469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.163478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.163485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.163491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.175869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.176313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.176330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.176337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.176515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.176693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.176701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.176708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.176714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.188912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.189363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.189381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.189389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.189566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.189745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.189754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.189761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.189768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.201961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.202394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.202410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.202418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.202595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.202772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.202781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.202788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.202794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.215135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.215553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.215597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.215621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.216155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.216333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.448 [2024-11-20 16:20:00.216342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.448 [2024-11-20 16:20:00.216349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.448 [2024-11-20 16:20:00.216355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.448 [2024-11-20 16:20:00.228200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.448 [2024-11-20 16:20:00.228638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.448 [2024-11-20 16:20:00.228683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.448 [2024-11-20 16:20:00.228706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.448 [2024-11-20 16:20:00.229161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.448 [2024-11-20 16:20:00.229340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.449 [2024-11-20 16:20:00.229348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.449 [2024-11-20 16:20:00.229355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.449 [2024-11-20 16:20:00.229361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.449 [2024-11-20 16:20:00.241385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.449 [2024-11-20 16:20:00.241766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.449 [2024-11-20 16:20:00.241783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.449 [2024-11-20 16:20:00.241790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.449 [2024-11-20 16:20:00.241968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.449 [2024-11-20 16:20:00.242162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.449 [2024-11-20 16:20:00.242171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.449 [2024-11-20 16:20:00.242177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.449 [2024-11-20 16:20:00.242184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.449 [2024-11-20 16:20:00.254501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.449 [2024-11-20 16:20:00.254907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.449 [2024-11-20 16:20:00.254924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.449 [2024-11-20 16:20:00.254932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.449 [2024-11-20 16:20:00.255116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.449 [2024-11-20 16:20:00.255293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.449 [2024-11-20 16:20:00.255305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.449 [2024-11-20 16:20:00.255312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.449 [2024-11-20 16:20:00.255318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.449 [2024-11-20 16:20:00.267669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.449 [2024-11-20 16:20:00.268095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.449 [2024-11-20 16:20:00.268112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.449 [2024-11-20 16:20:00.268119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.449 [2024-11-20 16:20:00.268297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.449 [2024-11-20 16:20:00.268476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.449 [2024-11-20 16:20:00.268484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.449 [2024-11-20 16:20:00.268490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.449 [2024-11-20 16:20:00.268497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.709 [2024-11-20 16:20:00.280855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.709 [2024-11-20 16:20:00.281276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.709 [2024-11-20 16:20:00.281293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.709 [2024-11-20 16:20:00.281301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.709 [2024-11-20 16:20:00.281478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.709 [2024-11-20 16:20:00.281657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.710 [2024-11-20 16:20:00.281665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.710 [2024-11-20 16:20:00.281671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.710 [2024-11-20 16:20:00.281678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.710 [2024-11-20 16:20:00.294046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.710 [2024-11-20 16:20:00.294464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.710 [2024-11-20 16:20:00.294481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.710 [2024-11-20 16:20:00.294489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.710 [2024-11-20 16:20:00.294666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.710 [2024-11-20 16:20:00.294843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.710 [2024-11-20 16:20:00.294852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.710 [2024-11-20 16:20:00.294858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.710 [2024-11-20 16:20:00.294867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.710 [2024-11-20 16:20:00.307229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.710 [2024-11-20 16:20:00.307643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.710 [2024-11-20 16:20:00.307660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.710 [2024-11-20 16:20:00.307667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.710 [2024-11-20 16:20:00.307845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.710 [2024-11-20 16:20:00.308030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.710 [2024-11-20 16:20:00.308039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.710 [2024-11-20 16:20:00.308046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.710 [2024-11-20 16:20:00.308052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.710 [2024-11-20 16:20:00.320395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.710 [2024-11-20 16:20:00.320811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.710 [2024-11-20 16:20:00.320828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.710 [2024-11-20 16:20:00.320835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.710 [2024-11-20 16:20:00.321018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.710 [2024-11-20 16:20:00.321196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.710 [2024-11-20 16:20:00.321205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.710 [2024-11-20 16:20:00.321211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.710 [2024-11-20 16:20:00.321218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.710 [2024-11-20 16:20:00.333556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.710 [2024-11-20 16:20:00.333971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.710 [2024-11-20 16:20:00.333988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.710 [2024-11-20 16:20:00.333996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.710 [2024-11-20 16:20:00.334173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.710 [2024-11-20 16:20:00.334352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.710 [2024-11-20 16:20:00.334361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.710 [2024-11-20 16:20:00.334367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.710 [2024-11-20 16:20:00.334373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.710 [2024-11-20 16:20:00.346715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.710 [2024-11-20 16:20:00.347128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.710 [2024-11-20 16:20:00.347145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.710 [2024-11-20 16:20:00.347152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.710 [2024-11-20 16:20:00.347325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.710 [2024-11-20 16:20:00.347518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.710 [2024-11-20 16:20:00.347526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.710 [2024-11-20 16:20:00.347533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.710 [2024-11-20 16:20:00.347539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2888614 Killed "${NVMF_APP[@]}" "$@" 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2890013 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2890013 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2890013 ']' 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.710 16:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.710 [2024-11-20 16:20:00.359890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.710 [2024-11-20 16:20:00.360300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.710 [2024-11-20 16:20:00.360318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.710 [2024-11-20 16:20:00.360325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.710 [2024-11-20 16:20:00.360503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.710 [2024-11-20 16:20:00.360681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.710 [2024-11-20 16:20:00.360689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.710 [2024-11-20 16:20:00.360696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.710 [2024-11-20 16:20:00.360702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.710 [2024-11-20 16:20:00.372960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.710 [2024-11-20 16:20:00.373373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.710 [2024-11-20 16:20:00.373389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.710 [2024-11-20 16:20:00.373396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.710 [2024-11-20 16:20:00.373574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.710 [2024-11-20 16:20:00.373752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.710 [2024-11-20 16:20:00.373760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.710 [2024-11-20 16:20:00.373767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.710 [2024-11-20 16:20:00.373773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.710 [2024-11-20 16:20:00.386158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.710 [2024-11-20 16:20:00.386514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.710 [2024-11-20 16:20:00.386531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.710 [2024-11-20 16:20:00.386538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.710 [2024-11-20 16:20:00.386716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.710 [2024-11-20 16:20:00.386896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.710 [2024-11-20 16:20:00.386905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.710 [2024-11-20 16:20:00.386911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.710 [2024-11-20 16:20:00.386918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.710 [2024-11-20 16:20:00.399297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.399734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.399751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.399758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.399935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.400118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.400127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.400134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.400140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:59.711 [2024-11-20 16:20:00.410408] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:26:59.711 [2024-11-20 16:20:00.410448] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.711 [2024-11-20 16:20:00.412339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.412778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.412795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.412803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.412986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.413165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.413174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.413181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.413187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.425473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.425911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.425929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.425936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.426120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.426298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.426307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.426314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.426321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.438666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.439100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.439119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.439127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.439306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.439484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.439493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.439500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.439506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.451722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.452137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.452160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.452168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.452347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.452525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.452534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.452541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.452548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.464919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.465361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.465378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.465386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.465564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.465741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.465750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.465757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.465764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.477977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.478415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.478433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.478440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.478618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.478796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.478805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.478812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.478820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.488785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:59.711 [2024-11-20 16:20:00.491045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.491416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.491434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.491442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.491622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.491801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.491811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.491818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.491825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.504229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.504615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.504634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.504642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.504820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.505004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.711 [2024-11-20 16:20:00.505014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.711 [2024-11-20 16:20:00.505021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.711 [2024-11-20 16:20:00.505027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.711 [2024-11-20 16:20:00.517374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.711 [2024-11-20 16:20:00.517789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.711 [2024-11-20 16:20:00.517805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.711 [2024-11-20 16:20:00.517813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.711 [2024-11-20 16:20:00.517998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.711 [2024-11-20 16:20:00.518177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.712 [2024-11-20 16:20:00.518187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.712 [2024-11-20 16:20:00.518195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.712 [2024-11-20 16:20:00.518201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:59.712 [2024-11-20 16:20:00.530569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.712 [2024-11-20 16:20:00.530713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.712 [2024-11-20 16:20:00.530736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:59.712 [2024-11-20 16:20:00.530743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.712 [2024-11-20 16:20:00.530749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.712 [2024-11-20 16:20:00.530754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.712 [2024-11-20 16:20:00.530990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.712 [2024-11-20 16:20:00.531009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.712 [2024-11-20 16:20:00.531017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.712 [2024-11-20 16:20:00.531195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.712 [2024-11-20 16:20:00.531373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.712 [2024-11-20 16:20:00.531382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.712 [2024-11-20 16:20:00.531388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.712 [2024-11-20 16:20:00.531395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.712 [2024-11-20 16:20:00.532165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.712 [2024-11-20 16:20:00.532268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.712 [2024-11-20 16:20:00.532269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.972 [2024-11-20 16:20:00.543771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.972 [2024-11-20 16:20:00.544229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.972 [2024-11-20 16:20:00.544249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.972 [2024-11-20 16:20:00.544257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.972 [2024-11-20 16:20:00.544435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.972 [2024-11-20 16:20:00.544615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.972 [2024-11-20 16:20:00.544624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.972 [2024-11-20 16:20:00.544631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.972 [2024-11-20 16:20:00.544638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.972 [2024-11-20 16:20:00.556824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.972 [2024-11-20 16:20:00.557205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.972 [2024-11-20 16:20:00.557224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.972 [2024-11-20 16:20:00.557232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.972 [2024-11-20 16:20:00.557409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.972 [2024-11-20 16:20:00.557589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.972 [2024-11-20 16:20:00.557598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.972 [2024-11-20 16:20:00.557605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.972 [2024-11-20 16:20:00.557612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.972 [2024-11-20 16:20:00.569984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:59.972 [2024-11-20 16:20:00.570417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.972 [2024-11-20 16:20:00.570442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:26:59.972 [2024-11-20 16:20:00.570451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:26:59.972 [2024-11-20 16:20:00.570630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:26:59.972 [2024-11-20 16:20:00.570808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:59.972 [2024-11-20 16:20:00.570817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:59.972 [2024-11-20 16:20:00.570824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:59.972 [2024-11-20 16:20:00.570831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:59.972 [2024-11-20 16:20:00.583042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.972 [2024-11-20 16:20:00.583437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.972 [2024-11-20 16:20:00.583456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.972 [2024-11-20 16:20:00.583465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.583643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.583822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.583830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.583838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.583845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.596231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.596631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.596649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.596658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.596837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.597021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.597030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.597037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.597044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.609409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.609823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.609840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.609848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.610034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.610211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.610220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.610227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.610233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.622620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.623036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.623054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.623062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.623240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.623417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.623426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.623433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.623439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.635815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.636232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.636249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.636257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.636434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.636613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.636622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.636629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.636635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.648861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.649240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.649257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.649265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.649443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.649621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.649633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.649640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.649646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.662039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.662438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.662455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.662462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.662640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.662817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.662826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.662832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.662838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 [2024-11-20 16:20:00.675246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.675659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.675676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.675683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.675861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.676045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.676054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.973 [2024-11-20 16:20:00.676061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.973 [2024-11-20 16:20:00.676067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.973 4933.67 IOPS, 19.27 MiB/s [2024-11-20T15:20:00.810Z] [2024-11-20 16:20:00.688335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.973 [2024-11-20 16:20:00.688681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.973 [2024-11-20 16:20:00.688698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.973 [2024-11-20 16:20:00.688705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.973 [2024-11-20 16:20:00.688883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.973 [2024-11-20 16:20:00.689072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.973 [2024-11-20 16:20:00.689082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.689089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.689099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.701484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.701940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.701963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.701970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.702148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.702327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.702336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.702343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.702350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.714564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.714992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.715010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.715018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.715196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.715374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.715383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.715389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.715395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.727756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.728185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.728202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.728210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.728388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.728567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.728576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.728582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.728588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.740826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.741261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.741283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.741290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.741468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.741647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.741655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.741662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.741668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.754098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.754514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.754531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.754538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.754717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.754895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.754904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.754911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.754917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.767297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.767650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.767666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.767673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.767850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.768032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.768041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.768048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.768054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.780418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.780829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.780846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.780853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.781038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.781220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.781229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.781235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.781242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:59.974 [2024-11-20 16:20:00.793471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:59.974 [2024-11-20 16:20:00.793879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.974 [2024-11-20 16:20:00.793895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:26:59.974 [2024-11-20 16:20:00.793903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:26:59.974 [2024-11-20 16:20:00.794084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:26:59.974 [2024-11-20 16:20:00.794263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:59.974 [2024-11-20 16:20:00.794272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:59.974 [2024-11-20 16:20:00.794278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:59.974 [2024-11-20 16:20:00.794284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.234 [2024-11-20 16:20:00.806660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.234 [2024-11-20 16:20:00.807070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.234 [2024-11-20 16:20:00.807088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.234 [2024-11-20 16:20:00.807096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.234 [2024-11-20 16:20:00.807274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.234 [2024-11-20 16:20:00.807453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.234 [2024-11-20 16:20:00.807462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.234 [2024-11-20 16:20:00.807468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.234 [2024-11-20 16:20:00.807474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.234 [2024-11-20 16:20:00.819858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.820270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.820288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.820295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.820473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.820650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.820663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.820670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.820676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.833034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.833448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.833464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.833472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.833650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.833827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.833836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.833842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.833848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.846219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.846631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.846648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.846656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.846833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.847015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.847024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.847031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.847037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.859401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.859810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.859827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.859835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.860015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.860194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.860202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.860209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.860219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.872596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.873011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.873028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.873035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.873212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.873391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.873399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.873406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.873412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.885799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.886232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.886250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.886257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.886435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.886612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.886621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.886627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.886634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.898997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.899404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.899421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.899428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.899604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.899781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.899790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.899797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.899803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.912350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.912765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.912786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.912794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.912976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.913155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.913164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.913171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.913177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.925553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.925966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.925984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.925992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.926170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.926348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.926356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.926363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.235 [2024-11-20 16:20:00.926370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.235 [2024-11-20 16:20:00.938746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:00.235 [2024-11-20 16:20:00.939121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.235 [2024-11-20 16:20:00.939139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420
00:27:00.235 [2024-11-20 16:20:00.939147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set
00:27:00.235 [2024-11-20 16:20:00.939326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor
00:27:00.235 [2024-11-20 16:20:00.939504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:00.235 [2024-11-20 16:20:00.939514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:00.235 [2024-11-20 16:20:00.939522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:00.236 [2024-11-20 16:20:00.939529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:00.236 [2024-11-20 16:20:00.951903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:00.952322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:00.952339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:00.952347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:00.952528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:00.952708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:00.952717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:00.952725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:00.952731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:00.964942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:00.965355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:00.965372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:00.965379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:00.965558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:00.965734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:00.965742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:00.965749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:00.965755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:00.978145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:00.978529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:00.978547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:00.978554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:00.978733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:00.978911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:00.978920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:00.978927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:00.978934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:00.991327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:00.991737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:00.991754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:00.991762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:00.991940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:00.992123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:00.992135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:00.992141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:00.992148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:01.004525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:01.004864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:01.004881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:01.004889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:01.005071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:01.005250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:01.005259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:01.005266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:01.005272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:01.017642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:01.018053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:01.018071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:01.018079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:01.018256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:01.018435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:01.018444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:01.018450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:01.018456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:01.030829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:01.031219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:01.031236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:01.031244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:01.031421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:01.031597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:01.031606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:01.031613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:01.031622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:01.044001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:01.044435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:01.044452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:01.044460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:01.044637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:01.044816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:01.044825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:01.044831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:01.044838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.236 [2024-11-20 16:20:01.057055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.236 [2024-11-20 16:20:01.057480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.236 [2024-11-20 16:20:01.057497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.236 [2024-11-20 16:20:01.057505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.236 [2024-11-20 16:20:01.057682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.236 [2024-11-20 16:20:01.057860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.236 [2024-11-20 16:20:01.057869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.236 [2024-11-20 16:20:01.057875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.236 [2024-11-20 16:20:01.057881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.496 [2024-11-20 16:20:01.070116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.496 [2024-11-20 16:20:01.070530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.496 [2024-11-20 16:20:01.070546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.496 [2024-11-20 16:20:01.070554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.496 [2024-11-20 16:20:01.070733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.496 [2024-11-20 16:20:01.070911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.496 [2024-11-20 16:20:01.070919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.496 [2024-11-20 16:20:01.070926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.496 [2024-11-20 16:20:01.070932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.083169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.083608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.083628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.083635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.083813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.083996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.084005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.084011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.084018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.096240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.096678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.096696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.096703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.096881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.097063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.097073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.097079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.097085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.109300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.109735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.109753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.109760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.109938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.110121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.110130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.110137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.110143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.122339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.122771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.122788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.122796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.122984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.123161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.123170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.123176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.123182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.135520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.135894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.135912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.135920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.136102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.136300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.136309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.136315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.136322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.148698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.149061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.149078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.149086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.149263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.149443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.149452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.149458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.149464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.161857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.162205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.162223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.162230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.162409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.162588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.162600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.162607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.162614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.175001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.175336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.175353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.175360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.175537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.175716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.175725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.175731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.175737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.188135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.188425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.188443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.188451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.188631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.188810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.188819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.188827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.497 [2024-11-20 16:20:01.188833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.497 [2024-11-20 16:20:01.201233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.497 [2024-11-20 16:20:01.201571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.497 [2024-11-20 16:20:01.201588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.497 [2024-11-20 16:20:01.201595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.497 [2024-11-20 16:20:01.201773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.497 [2024-11-20 16:20:01.201957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.497 [2024-11-20 16:20:01.201968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.497 [2024-11-20 16:20:01.201976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.201987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 [2024-11-20 16:20:01.214396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.214829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.214846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.214853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.215035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.215214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.215223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.215229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.215235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 [2024-11-20 16:20:01.227469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.227814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.227835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.227843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.228025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.228204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.228213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.228219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.228226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.498 [2024-11-20 16:20:01.240607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.240969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.240987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.240994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.241171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.241349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.241358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.241367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.241374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 [2024-11-20 16:20:01.253756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.254062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.254079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.254087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.254265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.254443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.254452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.254458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.254464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 [2024-11-20 16:20:01.266865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.267163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.267180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.267188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.267364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.267543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.267552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.267558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.267565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.498 [2024-11-20 16:20:01.279976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.280162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.498 [2024-11-20 16:20:01.280268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.280285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.280293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.280470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.280649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.280661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.280667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.280674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.498 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.498 [2024-11-20 16:20:01.293083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.293383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.293400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.293407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.293585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.293763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.293772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.293778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.293785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 [2024-11-20 16:20:01.306190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.306543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.306560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.306567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.306744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.498 [2024-11-20 16:20:01.306923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.498 [2024-11-20 16:20:01.306932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.498 [2024-11-20 16:20:01.306938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.498 [2024-11-20 16:20:01.306945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.498 [2024-11-20 16:20:01.319341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.498 [2024-11-20 16:20:01.319618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.498 [2024-11-20 16:20:01.319636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.498 [2024-11-20 16:20:01.319643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.498 [2024-11-20 16:20:01.319821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.499 [2024-11-20 16:20:01.320009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.499 [2024-11-20 16:20:01.320018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.499 [2024-11-20 16:20:01.320025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.499 [2024-11-20 16:20:01.320031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.499 Malloc0 00:27:00.499 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.499 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:00.499 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.499 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.757 [2024-11-20 16:20:01.332416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.757 [2024-11-20 16:20:01.332714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.757 [2024-11-20 16:20:01.332731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251e500 with addr=10.0.0.2, port=4420 00:27:00.757 [2024-11-20 16:20:01.332739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e500 is same with the state(6) to be set 00:27:00.757 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.757 [2024-11-20 16:20:01.332916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251e500 (9): Bad file descriptor 00:27:00.757 [2024-11-20 16:20:01.333100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:00.757 [2024-11-20 16:20:01.333111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:00.757 [2024-11-20 16:20:01.333118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:00.757 [2024-11-20 16:20:01.333125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:00.757 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:00.757 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.757 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.758 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.758 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.758 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.758 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:00.758 [2024-11-20 16:20:01.344311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.758 [2024-11-20 16:20:01.345514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:00.758 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.758 16:20:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2889042 00:27:00.758 [2024-11-20 16:20:01.371107] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:01.951 4713.86 IOPS, 18.41 MiB/s [2024-11-20T15:20:03.724Z] 5523.12 IOPS, 21.57 MiB/s [2024-11-20T15:20:05.098Z] 6141.78 IOPS, 23.99 MiB/s [2024-11-20T15:20:06.033Z] 6625.90 IOPS, 25.88 MiB/s [2024-11-20T15:20:06.968Z] 7020.55 IOPS, 27.42 MiB/s [2024-11-20T15:20:07.904Z] 7364.92 IOPS, 28.77 MiB/s [2024-11-20T15:20:08.841Z] 7653.92 IOPS, 29.90 MiB/s [2024-11-20T15:20:09.775Z] 7923.07 IOPS, 30.95 MiB/s [2024-11-20T15:20:09.775Z] 8128.67 IOPS, 31.75 MiB/s 00:27:08.938 Latency(us) 00:27:08.938 [2024-11-20T15:20:09.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.938 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:08.938 Verification LBA range: start 0x0 length 0x4000 00:27:08.938 Nvme1n1 : 15.01 8132.89 31.77 12556.63 0.00 6166.61 441.66 14816.83 00:27:08.938 [2024-11-20T15:20:09.775Z] =================================================================================================================== 00:27:08.938 [2024-11-20T15:20:09.775Z] Total : 8132.89 31.77 12556.63 0.00 6166.61 441.66 14816.83 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.197 rmmod nvme_tcp 00:27:09.197 rmmod nvme_fabrics 00:27:09.197 rmmod nvme_keyring 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2890013 ']' 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2890013 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2890013 ']' 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2890013 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.197 16:20:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890013 00:27:09.197 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:09.197 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:09.197 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890013' 00:27:09.197 killing process with pid 2890013 00:27:09.197 
16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2890013 00:27:09.197 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2890013 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.456 16:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.993 00:27:11.993 real 0m26.140s 00:27:11.993 user 1m1.248s 00:27:11.993 sys 0m6.684s 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.993 ************************************ 00:27:11.993 END TEST nvmf_bdevperf 00:27:11.993 
************************************ 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.993 ************************************ 00:27:11.993 START TEST nvmf_target_disconnect 00:27:11.993 ************************************ 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:11.993 * Looking for test storage... 00:27:11.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:11.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.993 --rc genhtml_branch_coverage=1 00:27:11.993 --rc genhtml_function_coverage=1 00:27:11.993 --rc genhtml_legend=1 00:27:11.993 --rc geninfo_all_blocks=1 00:27:11.993 --rc geninfo_unexecuted_blocks=1 
00:27:11.993 00:27:11.993 ' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:11.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.993 --rc genhtml_branch_coverage=1 00:27:11.993 --rc genhtml_function_coverage=1 00:27:11.993 --rc genhtml_legend=1 00:27:11.993 --rc geninfo_all_blocks=1 00:27:11.993 --rc geninfo_unexecuted_blocks=1 00:27:11.993 00:27:11.993 ' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:11.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.993 --rc genhtml_branch_coverage=1 00:27:11.993 --rc genhtml_function_coverage=1 00:27:11.993 --rc genhtml_legend=1 00:27:11.993 --rc geninfo_all_blocks=1 00:27:11.993 --rc geninfo_unexecuted_blocks=1 00:27:11.993 00:27:11.993 ' 00:27:11.993 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:11.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.993 --rc genhtml_branch_coverage=1 00:27:11.993 --rc genhtml_function_coverage=1 00:27:11.993 --rc genhtml_legend=1 00:27:11.993 --rc geninfo_all_blocks=1 00:27:11.993 --rc geninfo_unexecuted_blocks=1 00:27:11.993 00:27:11.994 ' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.994 16:20:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.994 16:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.349 
16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.349 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:17.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:17.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:17.350 Found net devices under 0000:86:00.0: cvl_0_0 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:17.350 Found net devices under 0000:86:00.1: cvl_0_1 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.350 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.610 16:20:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:17.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:27:17.610 00:27:17.610 --- 10.0.0.2 ping statistics --- 00:27:17.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.610 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:17.610 00:27:17.610 --- 10.0.0.1 ping statistics --- 00:27:17.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.610 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:17.610 16:20:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:17.610 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:17.870 ************************************ 00:27:17.870 START TEST nvmf_target_disconnect_tc1 00:27:17.870 ************************************ 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:17.870 [2024-11-20 16:20:18.615105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.870 [2024-11-20 16:20:18.615167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780ab0 with 
addr=10.0.0.2, port=4420 00:27:17.870 [2024-11-20 16:20:18.615201] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:17.870 [2024-11-20 16:20:18.615211] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:17.870 [2024-11-20 16:20:18.615218] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:17.870 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:17.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:17.870 Initializing NVMe Controllers 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:17.870 00:27:17.870 real 0m0.126s 00:27:17.870 user 0m0.043s 00:27:17.870 sys 0m0.080s 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:17.870 ************************************ 00:27:17.870 END TEST nvmf_target_disconnect_tc1 00:27:17.870 ************************************ 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:17.870 16:20:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:17.870 ************************************ 00:27:17.870 START TEST nvmf_target_disconnect_tc2 00:27:17.870 ************************************ 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.870 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2895082 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2895082 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2895082 ']' 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.130 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.130 [2024-11-20 16:20:18.756872] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:27:18.130 [2024-11-20 16:20:18.756916] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.130 [2024-11-20 16:20:18.833678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.130 [2024-11-20 16:20:18.877480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.130 [2024-11-20 16:20:18.877518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.130 [2024-11-20 16:20:18.877525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.130 [2024-11-20 16:20:18.877532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.130 [2024-11-20 16:20:18.877537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:18.130 [2024-11-20 16:20:18.879118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:18.130 [2024-11-20 16:20:18.879226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:18.130 [2024-11-20 16:20:18.879356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.130 [2024-11-20 16:20:18.879357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:18.389 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.389 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:18.389 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.389 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.389 16:20:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.389 Malloc0 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.389 16:20:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.389 [2024-11-20 16:20:19.063715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.389 16:20:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.389 [2024-11-20 16:20:19.091988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.389 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2895213 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:18.390 16:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.290 16:20:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2895082 00:27:20.290 16:20:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read 
completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 [2024-11-20 16:20:21.119470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 
00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Write completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.290 starting I/O failed 00:27:20.290 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 
00:27:20.291 [2024-11-20 16:20:21.119688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 
starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 [2024-11-20 16:20:21.119886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, 
sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Write completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 Read completed with error (sct=0, sc=8) 00:27:20.291 starting I/O failed 00:27:20.291 [2024-11-20 16:20:21.120092] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.291 [2024-11-20 16:20:21.120212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.120235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.120472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.120482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.120570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.120580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.120730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.120739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.120818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.120827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.291 [2024-11-20 16:20:21.120907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.120916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.121024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.121034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.121100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.121109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.121180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.121190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 00:27:20.291 [2024-11-20 16:20:21.121267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.291 [2024-11-20 16:20:21.121277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.291 qpair failed and we were unable to recover it. 
00:27:20.292 [2024-11-20 16:20:21.121359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.121450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.121553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.121644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.121741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 
00:27:20.292 [2024-11-20 16:20:21.121830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.121907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.121979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.121989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.122064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.122205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 
00:27:20.292 [2024-11-20 16:20:21.122294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.122368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.122510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.122660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.122814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.122823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 
00:27:20.292 [2024-11-20 16:20:21.123015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.123163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.123299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.123392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.123525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 
00:27:20.292 [2024-11-20 16:20:21.123685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.123825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.292 [2024-11-20 16:20:21.123836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.292 qpair failed and we were unable to recover it. 00:27:20.292 [2024-11-20 16:20:21.123987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.123998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.124130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.124268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:20:21.124358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.124512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.124724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.124870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.124958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.124968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:20:21.125039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:20:21.125540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.125866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.125875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.126010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.126021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.570 [2024-11-20 16:20:21.126077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.126087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.126226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.126237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.126309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.126318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.126392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.126401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 00:27:20.570 [2024-11-20 16:20:21.126470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.570 [2024-11-20 16:20:21.126480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.570 qpair failed and we were unable to recover it. 
00:27:20.571 [2024-11-20 16:20:21.126545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.571 [2024-11-20 16:20:21.126554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.571 qpair failed and we were unable to recover it. 00:27:20.571 [2024-11-20 16:20:21.126691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.571 [2024-11-20 16:20:21.126700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.571 qpair failed and we were unable to recover it. 00:27:20.571 [2024-11-20 16:20:21.126786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.571 [2024-11-20 16:20:21.126795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.571 qpair failed and we were unable to recover it. 00:27:20.571 [2024-11-20 16:20:21.126856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.571 [2024-11-20 16:20:21.126865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.571 qpair failed and we were unable to recover it. 00:27:20.571 [2024-11-20 16:20:21.126930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.571 [2024-11-20 16:20:21.126940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.571 qpair failed and we were unable to recover it. 
00:27:20.573 [2024-11-20 16:20:21.134122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.573 [2024-11-20 16:20:21.134136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420
00:27:20.573 qpair failed and we were unable to recover it.
00:27:20.573 [2024-11-20 16:20:21.134270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.573 [2024-11-20 16:20:21.134284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420
00:27:20.573 qpair failed and we were unable to recover it.
00:27:20.573 [2024-11-20 16:20:21.134374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.573 [2024-11-20 16:20:21.134410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:20.573 qpair failed and we were unable to recover it.
00:27:20.573 [2024-11-20 16:20:21.134519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.573 [2024-11-20 16:20:21.134551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.573 qpair failed and we were unable to recover it.
00:27:20.573 [2024-11-20 16:20:21.134701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.573 [2024-11-20 16:20:21.134716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.573 qpair failed and we were unable to recover it.
00:27:20.574 [2024-11-20 16:20:21.139019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.139123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.139288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.139393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.139537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 
00:27:20.574 [2024-11-20 16:20:21.139695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.139849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.139863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.140024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.140058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.140187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.140219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.140324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.140355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 
00:27:20.574 [2024-11-20 16:20:21.140557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.140589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.140723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.140754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.140865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.140897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.140997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.141102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 
00:27:20.574 [2024-11-20 16:20:21.141253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.141409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.141508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.141665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.141826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 
00:27:20.574 [2024-11-20 16:20:21.141903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.141918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.142001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.142016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.142089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.142103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.142265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.142279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.142429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.142461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 
00:27:20.574 [2024-11-20 16:20:21.142580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.142613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.142781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.142813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.143005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.143038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.143339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.574 [2024-11-20 16:20:21.143371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.574 qpair failed and we were unable to recover it. 00:27:20.574 [2024-11-20 16:20:21.143578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.143610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 
00:27:20.575 [2024-11-20 16:20:21.143837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.143870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.143991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.144025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.144291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.144323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.144504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.144536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.144715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.144747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 
00:27:20.575 [2024-11-20 16:20:21.144915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.144956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.145190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.145208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.145299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.145337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.145511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.145543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.145650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.145682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 
00:27:20.575 [2024-11-20 16:20:21.145804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.145836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.146023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.146057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.146367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.146409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.146654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.146675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.146828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.146846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 
00:27:20.575 [2024-11-20 16:20:21.147085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.147210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.147368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.147549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.147721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 
00:27:20.575 [2024-11-20 16:20:21.147837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.147938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.147962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.148196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.148213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.148374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.148407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.148599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.148629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 
00:27:20.575 [2024-11-20 16:20:21.148754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.148785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.148988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.149021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.149146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.575 [2024-11-20 16:20:21.149176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.575 qpair failed and we were unable to recover it. 00:27:20.575 [2024-11-20 16:20:21.149438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.149470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.149724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.149755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 
00:27:20.576 [2024-11-20 16:20:21.149990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.150022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.150258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.150301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.150507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.150525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.150619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.150637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.150747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.150764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 
00:27:20.576 [2024-11-20 16:20:21.150998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.151017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.151210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.151228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.151318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.151336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.151425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.151443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.151671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.151692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 
00:27:20.576 [2024-11-20 16:20:21.151795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.151813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.152031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.152049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.152152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.152170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.152406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.152423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.152578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.152602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 
00:27:20.576 [2024-11-20 16:20:21.152783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.152808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.152978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.153003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.153203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.153227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.153480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.153504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 00:27:20.576 [2024-11-20 16:20:21.153696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.576 [2024-11-20 16:20:21.153721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.576 qpair failed and we were unable to recover it. 
00:27:20.579 [2024-11-20 16:20:21.175613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.175644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.175841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.175878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.176119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.176152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.176271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.176303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.176493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.176526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 
00:27:20.579 [2024-11-20 16:20:21.176761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.176795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.177001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.177033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.177213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.177244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.177527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.177559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.177739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.177769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 
00:27:20.579 [2024-11-20 16:20:21.177906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.177936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.579 [2024-11-20 16:20:21.178159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.579 [2024-11-20 16:20:21.178192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.579 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.178367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.178398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.178518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.178549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.178680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.178711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 
00:27:20.580 [2024-11-20 16:20:21.178900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.178932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.179122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.179154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.179330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.179361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.179553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.179585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.179706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.179737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 
00:27:20.580 [2024-11-20 16:20:21.179857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.179888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.180074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.180106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.180229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.180261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.180457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.180488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.180618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.180650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 
00:27:20.580 [2024-11-20 16:20:21.180831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.180863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.180992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.181024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.181152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.181182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.181356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.181392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.181507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.181539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 
00:27:20.580 [2024-11-20 16:20:21.181710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.181740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.181918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.181955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.182122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.182154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.182324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.182354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.182461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.182491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 
00:27:20.580 [2024-11-20 16:20:21.182674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.182705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.182818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.182847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.183050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.183082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.183186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.183218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.183352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.183383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 
00:27:20.580 [2024-11-20 16:20:21.183682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.183713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.183887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.183918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.184116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.184149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.580 [2024-11-20 16:20:21.184415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.580 [2024-11-20 16:20:21.184446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.580 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.184561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.184592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.184833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.184863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.185069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.185102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.185281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.185313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.185504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.185535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.185725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.185756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.185887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.185918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.186114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.186145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.186314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.186344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.186468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.186499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.186615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.186646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.186750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.186780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.186967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.187001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.187171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.187202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.187307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.187338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.187574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.187606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.187711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.187742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.187922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.187960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.188157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.188188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.188359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.188389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.188566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.188597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.188699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.188730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.188844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.188875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.189074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.189107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.189219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.189417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.189489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.189692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.189727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.189858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.189891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.190033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.190066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.190220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.190252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.190489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.190521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.190639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.190669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.190782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.190814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.191050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.191083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.191195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.191226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 00:27:20.581 [2024-11-20 16:20:21.191407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.191439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.581 qpair failed and we were unable to recover it. 
00:27:20.581 [2024-11-20 16:20:21.191683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.581 [2024-11-20 16:20:21.191714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.191824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.191856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.192045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.192087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.192328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.192359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.192539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.192570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.192676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.192707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.192890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.192921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.193105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.193136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.193402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.193432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.193540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.193571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.193773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.193804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.194016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.194049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.194221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.194252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.194422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.194453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.194637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.194668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.194784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.194815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.195011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.195045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.195282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.195313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.195480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.195510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.195755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.195786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.195998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.196031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.196294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.196325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.196521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.196552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.196792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.196824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.196926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.196965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.197152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.197183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.197297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.197328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.197517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.197549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.197686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.197717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.197896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.197928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.198124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.198156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.198261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.198294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.198533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.198564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.198776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.198808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.199046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.199079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 
00:27:20.582 [2024-11-20 16:20:21.199230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.582 [2024-11-20 16:20:21.199261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.582 qpair failed and we were unable to recover it. 00:27:20.582 [2024-11-20 16:20:21.199433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.199464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.199645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.199676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.199850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.199881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.200000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.200032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.200162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.200194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.200364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.200395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.200576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.200611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.200855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.200886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.201090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.201123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.201247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.201278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.201401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.201432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.201629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.201660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.201795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.201825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.202106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.202140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.202327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.202357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.202461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.202492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.202672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.202702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.202889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.202920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.203113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.203185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.203420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.203491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.203710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.203746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.204039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.204075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.204342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.204374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.204507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.204538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.204721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.204752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.205009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.205043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.205228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.205260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.205498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.205529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.205634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.205665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.205766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.205797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.205913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.205944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.206120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.206151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.206331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.206361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.206462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.206501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 
00:27:20.583 [2024-11-20 16:20:21.206740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.206772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.583 [2024-11-20 16:20:21.206892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.583 [2024-11-20 16:20:21.206923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.583 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.207117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.207149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.207332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.207364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.207600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.207632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-11-20 16:20:21.207762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.207794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.207925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.207968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.208147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.208178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.208294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.208325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.208525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.208556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-11-20 16:20:21.208745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.208776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.208945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.208988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.209166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.209197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.209377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.209407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.209671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.209702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-11-20 16:20:21.209821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.209852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.210062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.210095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.210279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.210309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.210481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.210512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.210683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.210715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-11-20 16:20:21.210904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.210935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.211075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.211107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.211366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.211397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.211579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.211611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.211845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.211875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-11-20 16:20:21.212079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.212111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.212315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.212352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.212525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.212556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.212726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.212757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 00:27:20.584 [2024-11-20 16:20:21.212888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.584 [2024-11-20 16:20:21.212919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:20.584 qpair failed and we were unable to recover it. 
00:27:20.584 [2024-11-20 16:20:21.213105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.584 [2024-11-20 16:20:21.213175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:20.584 qpair failed and we were unable to recover it.
00:27:20.584 [2024-11-20 16:20:21.213445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.584 [2024-11-20 16:20:21.213516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.584 qpair failed and we were unable to recover it.
00:27:20.584 [2024-11-20 16:20:21.213656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.584 [2024-11-20 16:20:21.213690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.584 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.213975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.214180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.214211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.214467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.214498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.214708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.214739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.214925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.214966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.215155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.215186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.215422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.215452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.215583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.215614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.215833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.215864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.215973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.216005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.216270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.216301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.216482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.216513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.216702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.216732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.216869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.216900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.217097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.217129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.217376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.217407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.217643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.217674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.217856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.217887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.218117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.218149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.218407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.218438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.218567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.218605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.218798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.218829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.219091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.219123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.219306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.219337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.219539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.219571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.219805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.219836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.220036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.220069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.220195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.220225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.220396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.220427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.220663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.220693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.220968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.221000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.221170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.221202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.221391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.221421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.221553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.221584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.221721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.221752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.585 [2024-11-20 16:20:21.221991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.585 [2024-11-20 16:20:21.222024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.585 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.222254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.222285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.222402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.222432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.222694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.222725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.222861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.222892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.223072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.223104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.223286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.223316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.223518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.223549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.223730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.223760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.223940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.223983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.224169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.224201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.224392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.224422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.224555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.224586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.224771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.224803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.224915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.224946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.225081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.225112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.225236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.225266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.225526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.225556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.225673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.225703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.225940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.225981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.226098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.226129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.226236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.226266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.226446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.226477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.226599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.226630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.226805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.226835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.227007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.227045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.227282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.227312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.227554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.227585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.227789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.227820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.227923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.227962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.228090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.228120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.228239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.228269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.228467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.228498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.228620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.228651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.228883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.228913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.586 [2024-11-20 16:20:21.229106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.586 [2024-11-20 16:20:21.229138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.586 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.229244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.229275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.229405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.229437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.229606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.229636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.229846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.229878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.230116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.230149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.230361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.230391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.230640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.230672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.230861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.230890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.231027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.231059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.231184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.231214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.231390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.231420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.231601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.231632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.231768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.231799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.231916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.231945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.232159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.232190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.232379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.232409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.232608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.232640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.232907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.232937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.233179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.233210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.233379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.233409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.233652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.587 [2024-11-20 16:20:21.233682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.587 qpair failed and we were unable to recover it.
00:27:20.587 [2024-11-20 16:20:21.233873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.233905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.234036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.234067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.234313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.234343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.234555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.234586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.234763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.234793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 
00:27:20.587 [2024-11-20 16:20:21.235002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.235035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.235163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.235194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.235310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.235340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.235574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.235610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.235785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.235816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 
00:27:20.587 [2024-11-20 16:20:21.235940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.235989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.236172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.236203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.236392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.236421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.236629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.236659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.587 [2024-11-20 16:20:21.236850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.236880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 
00:27:20.587 [2024-11-20 16:20:21.237170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.587 [2024-11-20 16:20:21.237202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.587 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.237458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.237489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.237675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.237705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.237883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.237913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.238075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.238107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 
00:27:20.588 [2024-11-20 16:20:21.238341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.238371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.238543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.238575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.238713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.238744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.238958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.238991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.239179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.239210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 
00:27:20.588 [2024-11-20 16:20:21.239449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.239479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.239715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.239745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.239982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.240015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.240218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.240248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.240385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.240415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 
00:27:20.588 [2024-11-20 16:20:21.240625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.240656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.240827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.240857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.241033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.241065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.241233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.241264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.241556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 
00:27:20.588 [2024-11-20 16:20:21.241763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.241795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.241923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.241962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.242198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.242229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.242409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.242440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.242678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.242707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 
00:27:20.588 [2024-11-20 16:20:21.242876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.242907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.243102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.243134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.243319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.243351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.243476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.243506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.243688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.243718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 
00:27:20.588 [2024-11-20 16:20:21.243963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.243995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.244181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.244212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.244325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.244356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.244493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.244529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.588 qpair failed and we were unable to recover it. 00:27:20.588 [2024-11-20 16:20:21.244721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.588 [2024-11-20 16:20:21.244751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.244937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.244978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.245236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.245267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.245552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.245582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.245844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.245876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.246016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.246049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.246231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.246262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.246511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.246543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.246725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.246755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.246936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.246978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.247105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.247136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.247388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.247418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.247673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.247704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.247988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.248021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.248233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.248264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.248394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.248425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.248664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.248695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.248866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.248897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.249085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.249117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.249327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.249357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.249527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.249558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.249680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.249711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.249907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.249939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.250060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.250092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.250330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.250361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.250542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.250572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.250841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.250873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.251132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.251165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.251422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.251453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.251634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.251665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.251843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.251875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 
00:27:20.589 [2024-11-20 16:20:21.252056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.252088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.589 qpair failed and we were unable to recover it. 00:27:20.589 [2024-11-20 16:20:21.252292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.589 [2024-11-20 16:20:21.252322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-11-20 16:20:21.252558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-11-20 16:20:21.252589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-11-20 16:20:21.252714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-11-20 16:20:21.252745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 00:27:20.590 [2024-11-20 16:20:21.252858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-11-20 16:20:21.252889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it. 
00:27:20.590 [2024-11-20 16:20:21.253089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.590 [2024-11-20 16:20:21.253121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.590 qpair failed and we were unable to recover it.
00:27:20.593 [identical connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error repeated continuously from 16:20:21.253 through 16:20:21.276 for the same tqpair=0x7fd848000b90, addr=10.0.0.2, port=4420; every retry failed and the qpair could not be recovered]
00:27:20.593 [2024-11-20 16:20:21.276997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.277030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.277214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.277245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.277442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.277473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.277658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.277689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.277870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.277902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 
00:27:20.593 [2024-11-20 16:20:21.278024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.278056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.278178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.278209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.278408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.278439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.278563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.278600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.278774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.278805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 
00:27:20.593 [2024-11-20 16:20:21.279018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.279050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.279170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.279201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.279369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.279400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.279646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.279676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.279854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.279885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 
00:27:20.593 [2024-11-20 16:20:21.280075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.280108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.280295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.280326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.280564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.280595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.280793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.280825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.281010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.281042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 
00:27:20.593 [2024-11-20 16:20:21.281149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.281181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.281351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.281382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.281562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.281594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.281776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.281807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.593 [2024-11-20 16:20:21.281924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.281973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 
00:27:20.593 [2024-11-20 16:20:21.282174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.593 [2024-11-20 16:20:21.282206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.593 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.282393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.282424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.282608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.282639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.282823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.282854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.283091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.283124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-11-20 16:20:21.283309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.283340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.283602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.283633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.283822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.283853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.284035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.284067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.284187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.284218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-11-20 16:20:21.284352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.284384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.284633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.284663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.284840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.284871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.285135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.285168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.285431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.285463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-11-20 16:20:21.285636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.285667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.285869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.285900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.286118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.286150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.286407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.286437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.286611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.286641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-11-20 16:20:21.286778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.286809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.287044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.287077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.287313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.287344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.287581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.287619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.287878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.287909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-11-20 16:20:21.288102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.288134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.288327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.288357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.288487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.288519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.288705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.288736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.288853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.288885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 
00:27:20.594 [2024-11-20 16:20:21.289142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.289175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.289356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.594 [2024-11-20 16:20:21.289387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.594 qpair failed and we were unable to recover it. 00:27:20.594 [2024-11-20 16:20:21.289572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.289602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.289724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.289755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.289930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.289971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-11-20 16:20:21.290185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.290216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.290334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.290366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.290497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.290529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.290657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.290687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.290859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.290891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-11-20 16:20:21.291071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.291104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.291274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.291305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.291532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.291563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.291744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.291776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.291992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.292025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-11-20 16:20:21.292210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.292240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.292421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.292452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.292661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.292691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.292871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.292901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.293099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.293132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.595 [2024-11-20 16:20:21.293377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.293408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.293535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.293566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.293680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.293711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.293880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.293911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 00:27:20.595 [2024-11-20 16:20:21.294125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.595 [2024-11-20 16:20:21.294158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.595 qpair failed and we were unable to recover it. 
00:27:20.598 [2024-11-20 16:20:21.316723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.316754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.316953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.316985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.317219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.317250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.317489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.317520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.317690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.317721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 
00:27:20.598 [2024-11-20 16:20:21.317902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.317939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.318125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.318156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.318285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.318316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.318536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.318568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 00:27:20.598 [2024-11-20 16:20:21.318743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.598 [2024-11-20 16:20:21.318775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.598 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.319033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.319065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.319275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.319307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.319499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.319530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.319716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.319746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.319870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.319900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.320084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.320116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.320299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.320329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.320553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.320584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.320820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.320851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.320982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.321015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.321205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.321235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.321366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.321398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.321518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.321548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.321728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.321758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.321945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.322006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.322179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.322210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.322414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.322444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.322620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.322651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.322851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.322882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.323003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.323035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.323161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.323191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.323366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.323397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.323576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.323606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.323866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.323897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.324073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.324104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.324212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.324242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.324345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.324375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.324558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.324589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.324834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.324865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 00:27:20.599 [2024-11-20 16:20:21.325119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.325151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.599 qpair failed and we were unable to recover it. 
00:27:20.599 [2024-11-20 16:20:21.325255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.599 [2024-11-20 16:20:21.325286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.325522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.325552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.325654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.325685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.325864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.325895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.326155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.326187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.326380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.326418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.326599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.326629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.326885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.326915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.327044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.327077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.327193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.327223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.327499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.327530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.327637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.327668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.327840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.327870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.328038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.328071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.328255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.328285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.328414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.328445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.328732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.328762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.328930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.328970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.329097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.329128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.329256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.329287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.329401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.329431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.329600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.329631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.329803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.329834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.330024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.330056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.330333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.330364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.330536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.330567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.330739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.330769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.330941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.330982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.331108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.331140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.331268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.331299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.331479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.331509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.331698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.331728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.331914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.331946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.332147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.332178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 00:27:20.600 [2024-11-20 16:20:21.332366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.332396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.600 qpair failed and we were unable to recover it. 
00:27:20.600 [2024-11-20 16:20:21.332515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.600 [2024-11-20 16:20:21.332546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.332715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.332745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.332862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.332892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.333112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.333145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.333270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.333301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.333432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.333462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.333577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.333607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.333741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.333771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.333943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.333993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.334170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.334201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.334380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.334416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.334565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.334595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.334765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.334796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.334903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.334934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.335184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.335216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.335394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.335426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.335604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.335635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.335823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.335854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.336087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.336120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.336288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.336318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.336429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.336459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.336667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.336698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.336885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.336916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.337176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.337207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.337357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.337388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.337500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.337530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.337726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.337757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.337890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.337920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.338108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.338140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.338324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.338354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.338600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.338631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.338766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.338796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.338975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.339007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.339211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.339331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.339360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 
00:27:20.601 [2024-11-20 16:20:21.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.339588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.601 [2024-11-20 16:20:21.339852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.601 [2024-11-20 16:20:21.339883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.601 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.340079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.340111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.340235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.340266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.340511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.340541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.340725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.340756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.340969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.341094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.341124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.341304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.341334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.341497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.341528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.341790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.341820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.342002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.342034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.342153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.342183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.342353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.342383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.342565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.342595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.342767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.342804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.343043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.343074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.343184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.343215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.343398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.343429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.343680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.343710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.343973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.344005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.344167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.344197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.344373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.344403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.344606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.344636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.344873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.344903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.345094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.345126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.345252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.345283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.345520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.345553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.345723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.345752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.345941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.345982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.346168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.346199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.346406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.346436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.346662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.346693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.346881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.346913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.347136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.347168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 
00:27:20.602 [2024-11-20 16:20:21.347308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.347339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.347516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.602 [2024-11-20 16:20:21.347547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.602 qpair failed and we were unable to recover it. 00:27:20.602 [2024-11-20 16:20:21.347662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.347693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.347882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.347912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.348182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.348215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 
00:27:20.603 [2024-11-20 16:20:21.348343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.348374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.348488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.348519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.348783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.348813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.348933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.348974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.349155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.349185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 
00:27:20.603 [2024-11-20 16:20:21.349422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.349452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.349591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.349622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.349805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.349835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.350039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.350071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.350188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.350219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 
00:27:20.603 [2024-11-20 16:20:21.350348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.350379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.350578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.350608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.350787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.350818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.351016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.351047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.351312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.351344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 
00:27:20.603 [2024-11-20 16:20:21.351451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.351487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.351614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.351644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.351751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.351782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.351967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.351999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.352202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.352233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 
00:27:20.603 [2024-11-20 16:20:21.352492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.352523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.352693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.352723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.352966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.352998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.353202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.353232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 00:27:20.603 [2024-11-20 16:20:21.353418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.603 [2024-11-20 16:20:21.353448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.603 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-11-20 16:20:21.374441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.374512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-11-20 16:20:21.375261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.375294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.375474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.375505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.375712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.375745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.375918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.375963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.376207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.376239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-11-20 16:20:21.376499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.376531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.376795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.376826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.377069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.377102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.377220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.377250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.377429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.377460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 
00:27:20.606 [2024-11-20 16:20:21.377634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.377674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.377794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.377825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.378087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.378118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.606 qpair failed and we were unable to recover it. 00:27:20.606 [2024-11-20 16:20:21.378250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.606 [2024-11-20 16:20:21.378282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.378460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.378491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.378687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.378720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.378894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.378925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.379192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.379225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.379406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.379437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.379570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.379601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.379718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.379750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.379958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.379991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.380169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.380200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.380325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.380356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.380546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.380577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.380750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.380781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.380911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.380943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.381080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.381112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.381240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.381271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.381441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.381472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.381582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.381615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.381795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.381826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.382009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.382041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.382224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.382254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.382432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.382465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.382588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.382618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.382732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.382763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.382889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.382941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.383163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.383201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.383434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.383623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.383672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.383882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.383915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.384191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.384227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.384351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.384384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.384569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.384600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.384725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.384758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.384880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.384920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.385078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.385125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.385321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.385368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.607 [2024-11-20 16:20:21.385483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.385515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 
00:27:20.607 [2024-11-20 16:20:21.385636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.607 [2024-11-20 16:20:21.385676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.607 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.385863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.385898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.386061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.386096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.386355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.386392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.386501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.386532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-11-20 16:20:21.386720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.386769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.386991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.387038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.387284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.387333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.387483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.387527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.387686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.387729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 
00:27:20.897 [2024-11-20 16:20:21.387930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.387988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.388201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.897 [2024-11-20 16:20:21.388237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.897 qpair failed and we were unable to recover it. 00:27:20.897 [2024-11-20 16:20:21.388476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.388507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.388714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.388746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.388881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.388913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-11-20 16:20:21.389118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.389150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.389360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.389392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.389516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.389546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.389795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.389826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.390020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.390054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-11-20 16:20:21.390167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.390199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.390305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.390337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.390468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.390499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.390628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.390658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.390894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.390925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-11-20 16:20:21.391121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.391155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.391332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.391362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.391510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.391565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.391765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.391800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.391926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.391965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-11-20 16:20:21.392086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.392116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.392336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.392535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.392567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.392758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.392790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.392974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.393016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.898 [2024-11-20 16:20:21.393205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.393237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.393407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.393438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.393630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.393665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.393864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.393894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 00:27:20.898 [2024-11-20 16:20:21.394087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.898 [2024-11-20 16:20:21.394120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.898 qpair failed and we were unable to recover it. 
00:27:20.901 [2024-11-20 16:20:21.416927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.901 [2024-11-20 16:20:21.416969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-11-20 16:20:21.417155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.901 [2024-11-20 16:20:21.417188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-11-20 16:20:21.417361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.901 [2024-11-20 16:20:21.417393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-11-20 16:20:21.417574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.901 [2024-11-20 16:20:21.417607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-11-20 16:20:21.417718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.417749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.417933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.417975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.418103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.418136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.418328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.418359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.418549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.418581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.418823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.418855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.418976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.419008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.419311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.419346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.419533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.419566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.419735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.419768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.419972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.420004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.420193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.420225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.420348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.420380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.420505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.420536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.420660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.420692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.420800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.420831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.421101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.421134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.421256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.421287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.421484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.421515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.421637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.421669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.421865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.421899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.422045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.422079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.422257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.422289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.422470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.422500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.422635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.422667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.422794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.422827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.422999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.423032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.423221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.423253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.423383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.423414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.423531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.423564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-11-20 16:20:21.423674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.902 [2024-11-20 16:20:21.423706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-11-20 16:20:21.423891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.423922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.424170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.424204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.424394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.424432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.424550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.424583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.424815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.424847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.425037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.425071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.425221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.425253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.425429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.425463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.425666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.425698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.425938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.425982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.426171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.426204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.426411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.426442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.426592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.426623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.426754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.426786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.426994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.427027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.427197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.427230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.427355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.427388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.427537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.427568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.427760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.427792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.427986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.428019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.428221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.428254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.428497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.428530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.428661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.428692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.428793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.428825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.429009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.429043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.429176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.429208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.429457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.429489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.429660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.429690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.429888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.429920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.430075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.430108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.430283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.430315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.430577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.430609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.430852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.430885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.431019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.431051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.431277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.431308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 
00:27:20.903 [2024-11-20 16:20:21.431411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.903 [2024-11-20 16:20:21.431442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.903 qpair failed and we were unable to recover it. 00:27:20.903 [2024-11-20 16:20:21.431574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.431606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.431787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.431817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.431998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.432031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.432217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.432253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-11-20 16:20:21.432379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.432411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.432590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.432622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.432804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.432841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.433025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.433057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.433178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.433216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-11-20 16:20:21.433487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.433519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.433764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.433796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.434007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.434042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.434157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.434188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.434361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.434392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-11-20 16:20:21.434649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.434681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.434795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.434827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.434956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.434988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.435090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.435122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.435391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.435423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-11-20 16:20:21.435617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.435649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.435895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.435927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.436062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.436094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.436218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.436250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.436464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.436497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-11-20 16:20:21.436686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.436718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.436919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.436959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.437163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.437196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.437370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.437401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.437528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.437560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 
00:27:20.904 [2024-11-20 16:20:21.437736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.437767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.437883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.437915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.438047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.438080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.904 qpair failed and we were unable to recover it. 00:27:20.904 [2024-11-20 16:20:21.438188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.904 [2024-11-20 16:20:21.438219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.438501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.438532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.438715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.438746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.438864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.438895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.439026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.439058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.439323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.439355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.439479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.439511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.439641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.439673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.439940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.440002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.440173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.440206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.440318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.440349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.440557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.440588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.440772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.440803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.440994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.441027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.441265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.441304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.441428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.441459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.441639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.441671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.441794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.441826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.441944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.441985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.442249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.442281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.442399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.442430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.442641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.442672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.442917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.442958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.443142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.443174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.443350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.443382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.443512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.443555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.443829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.443863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.444155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.444187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.444385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.444427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.444636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.444675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.444904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.444941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.445143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.445175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.445423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.445470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.445657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.445688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.445824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.445856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.446045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.446081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.446296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.446332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 
00:27:20.905 [2024-11-20 16:20:21.446445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.905 [2024-11-20 16:20:21.446476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.905 qpair failed and we were unable to recover it. 00:27:20.905 [2024-11-20 16:20:21.446659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.446691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.446879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.446910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.447168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.447200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.447321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.447353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-11-20 16:20:21.447571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.447605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.447846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.447878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.448006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.448039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.448172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.448204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.448374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.448407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-11-20 16:20:21.448624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.448656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.448797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.448830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.448967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.449000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.449108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.449140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.449379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.449413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-11-20 16:20:21.449522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.449554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.449752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.449785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.449906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.449944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.450142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.450174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.450287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.450318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-11-20 16:20:21.450514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.450545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.450786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.450823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.451012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.451045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.451153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.451185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.451379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.451410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-11-20 16:20:21.451516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.451547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.451679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.451711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.451900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.451931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.452133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.452165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.452368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.452402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.906 [2024-11-20 16:20:21.452639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.452670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.452858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.453137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.453170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.453297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.453328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 00:27:20.906 [2024-11-20 16:20:21.453604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.906 [2024-11-20 16:20:21.453636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.906 qpair failed and we were unable to recover it. 
00:27:20.909 [2024-11-20 16:20:21.470765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.909 [2024-11-20 16:20:21.470797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.909 qpair failed and we were unable to recover it.
00:27:20.909 [2024-11-20 16:20:21.470917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.909 [2024-11-20 16:20:21.470957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.909 qpair failed and we were unable to recover it.
00:27:20.909 [2024-11-20 16:20:21.471135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.909 [2024-11-20 16:20:21.471167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.909 qpair failed and we were unable to recover it.
00:27:20.909 [2024-11-20 16:20:21.471338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.909 [2024-11-20 16:20:21.471370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.909 qpair failed and we were unable to recover it.
00:27:20.909 [2024-11-20 16:20:21.471549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.909 [2024-11-20 16:20:21.471622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:20.909 qpair failed and we were unable to recover it.
00:27:20.910 [2024-11-20 16:20:21.475048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.475081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.475258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.475288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.475538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.475570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.475761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.475793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.475977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.476011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 
00:27:20.910 [2024-11-20 16:20:21.476186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.476219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.476433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.476465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.476584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.476616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.476747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.476779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.476967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.477001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 
00:27:20.910 [2024-11-20 16:20:21.477180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.477211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.477414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.477446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.477561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.477592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.477856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.477888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.478110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.478143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 
00:27:20.910 [2024-11-20 16:20:21.478264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.478296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.478483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.478515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.478794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.478824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.478997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.479030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.479219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.479251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 
00:27:20.910 [2024-11-20 16:20:21.479363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.479394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.479503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.479534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.479800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.479832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.479970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.480004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.480170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.480241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 
00:27:20.910 [2024-11-20 16:20:21.480503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.480539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.480742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.480775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.481045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.481080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.481190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.910 [2024-11-20 16:20:21.481221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.910 qpair failed and we were unable to recover it. 00:27:20.910 [2024-11-20 16:20:21.481506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.481537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.481712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.481743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.481991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.482025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.482143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.482173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.482288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.482320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.482490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.482523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.482712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.482742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.483017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.483049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.483224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.483265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.483447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.483479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.483720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.483751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.484030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.484064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.484190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.484220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.484390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.484420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.484550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.484580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.484771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.484803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.484977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.485012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.485267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.485297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.485486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.485518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.485653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.485685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.485877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.485911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.486164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.486197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.486409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.486441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.486546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.486577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.486767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.486799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.486986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.487018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.487144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.487176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.487414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.487447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.487654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.487697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.487818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.487849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.487972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.488006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 
00:27:20.911 [2024-11-20 16:20:21.488245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.488276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.488390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.488420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.488609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.911 [2024-11-20 16:20:21.488640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.911 qpair failed and we were unable to recover it. 00:27:20.911 [2024-11-20 16:20:21.488767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.488801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.488970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.489049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-11-20 16:20:21.489305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.489342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.489518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.489551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.489748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.489781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.490002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.490036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.490240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.490273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-11-20 16:20:21.490467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.490498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.490674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.490705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.490878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.490911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.491129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.491163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.491343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.491375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-11-20 16:20:21.491493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.491526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.491787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.491819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.491997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.492040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.492257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.492289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 00:27:20.912 [2024-11-20 16:20:21.492530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.912 [2024-11-20 16:20:21.492563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:20.912 qpair failed and we were unable to recover it. 
00:27:20.912 [2024-11-20 16:20:21.492758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.912 [2024-11-20 16:20:21.492789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:20.912 qpair failed and we were unable to recover it.
00:27:20.913 [2024-11-20 16:20:21.497938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.913 [2024-11-20 16:20:21.497984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.913 qpair failed and we were unable to recover it.
00:27:20.915 [2024-11-20 16:20:21.517330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.915 [2024-11-20 16:20:21.517363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.915 qpair failed and we were unable to recover it. 00:27:20.915 [2024-11-20 16:20:21.517544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.915 [2024-11-20 16:20:21.517576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.915 qpair failed and we were unable to recover it. 00:27:20.915 [2024-11-20 16:20:21.517759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.915 [2024-11-20 16:20:21.517791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.915 qpair failed and we were unable to recover it. 00:27:20.915 [2024-11-20 16:20:21.517971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.915 [2024-11-20 16:20:21.518005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.915 qpair failed and we were unable to recover it. 00:27:20.915 [2024-11-20 16:20:21.518220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.915 [2024-11-20 16:20:21.518252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.915 qpair failed and we were unable to recover it. 
00:27:20.915 [2024-11-20 16:20:21.518476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.915 [2024-11-20 16:20:21.518508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.915 qpair failed and we were unable to recover it. 00:27:20.915 [2024-11-20 16:20:21.518746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.518778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.519018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.519052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.519173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.519204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.519308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.519340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.519582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.519617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.519808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.519840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.519961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.519994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.520132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.520164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.520346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.520378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.520618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.520650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.520849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.520881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.520998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.521037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.521302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.521334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.521512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.521544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.521778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.521810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.521989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.522022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.522140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.522171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.522410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.522442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.522684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.522716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.522900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.522931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.523148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.523181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.523375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.523407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.523527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.523559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.523691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.523730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.523970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.524004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.524221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.524255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.524382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.524414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.524544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.524575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.524703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.524735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.524920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.524961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.525147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.525179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.525298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.525329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.525505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.525537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.525707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.525740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 
00:27:20.916 [2024-11-20 16:20:21.525864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.525896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.526029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.526063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.526262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.526295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.526413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.916 [2024-11-20 16:20:21.526446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.916 qpair failed and we were unable to recover it. 00:27:20.916 [2024-11-20 16:20:21.526619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.526651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.526919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.526976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.527170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.527201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.527321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.527355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.527557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.527590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.527768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.527800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.527975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.528010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.528192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.528223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.528340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.528372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.528547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.528578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.528758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.528791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.528971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.529005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.529214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.529246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.529448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.529481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.529650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.529682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.529867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.529900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.530083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.530117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.530225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.530256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.530360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.530391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.530629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.530661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.530844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.530875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.531056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.531090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.531263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.531295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.531473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.531508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.531645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.531678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.531799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.531846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.532042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.532076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.532267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.532299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.532565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.532598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.532787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.532821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.532941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.532983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 
00:27:20.917 [2024-11-20 16:20:21.533171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.917 [2024-11-20 16:20:21.533204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.917 qpair failed and we were unable to recover it. 00:27:20.917 [2024-11-20 16:20:21.533319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.533351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.533480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.533511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.533798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.533830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.534030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.534066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.534324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.534358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.534493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.534525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.534653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.534686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.534881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.534913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.535093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.535126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.535298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.535330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.535508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.535540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.535806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.535840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.536033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.536067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.536189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.536222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.536345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.536377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.536626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.536658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.536896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.536934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.537197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.537230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.537352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.537384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.537561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.537593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.537784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.537817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.538087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.538121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.538333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.538366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.538570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.538602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.538809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.538841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.538987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.539020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.539285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.539318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.539489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.539521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.539709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.539740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.539915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.539960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.540081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.540113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.540333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.540528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.540559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.540741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.540784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 
00:27:20.918 [2024-11-20 16:20:21.540977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.541010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.541113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.541145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.541334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.918 [2024-11-20 16:20:21.541366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.918 qpair failed and we were unable to recover it. 00:27:20.918 [2024-11-20 16:20:21.541481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.541512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.541697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.541729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.541908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.541939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.542069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.542101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.542216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.542248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.542536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.542568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.542809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.542841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.542963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.542996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.543266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.543298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.543418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.543452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.543702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.543735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.543861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.543893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.544032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.544065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.544174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.544206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.544392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.544424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.544540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.544571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.544698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.544731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.544943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.544999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.545182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.545214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.545328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.545361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.545471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.545501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.545689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.545721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.545902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.545933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.546080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.546114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.546311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.546343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.546529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.546560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.546676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.546708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.546827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.546860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.547117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.547151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.547272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.547304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.547408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.547440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.547621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.547652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.547775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.547806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.547921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.547980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.548101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.548133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.548249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.548280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.548478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.548514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 
00:27:20.919 [2024-11-20 16:20:21.548778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.919 [2024-11-20 16:20:21.548812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.919 qpair failed and we were unable to recover it. 00:27:20.919 [2024-11-20 16:20:21.548944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.548986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.549183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.549216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.549349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.549381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.549559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.549591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 
00:27:20.920 [2024-11-20 16:20:21.549771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.549802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.549975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.550129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.550370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.550515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 
00:27:20.920 [2024-11-20 16:20:21.550672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.550805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.550962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.550995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.551109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.551142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.551382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.551414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 
00:27:20.920 [2024-11-20 16:20:21.551595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.551629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.551809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.551840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.551969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.552001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.552127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.552159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.552339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.552370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 
00:27:20.920 [2024-11-20 16:20:21.552544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.552577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.552773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.552805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.552908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.552937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.553075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.553108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 00:27:20.920 [2024-11-20 16:20:21.553288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.920 [2024-11-20 16:20:21.553320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.920 qpair failed and we were unable to recover it. 
00:27:20.923 [2024-11-20 16:20:21.575825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.575859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.575977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.576009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.576193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.576225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.576329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.576361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.576545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.576576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 
00:27:20.923 [2024-11-20 16:20:21.576762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.576795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.576982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.577015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.577262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.577294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.577476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.577510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.577638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.577669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 
00:27:20.923 [2024-11-20 16:20:21.577783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.577815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.923 [2024-11-20 16:20:21.578009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.923 [2024-11-20 16:20:21.578043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.923 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.578152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.578184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.578424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.578456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.578567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.578599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.578708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.578740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.578857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.578890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.579011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.579044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.579177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.579208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.579425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.579457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.579647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.579679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.579794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.579826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.579957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.579991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.580192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.580224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.580357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.580394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.580513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.580546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.580727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.580760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.581010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.581044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.581169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.581202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.581382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.581415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.581654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.581686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.581924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.581963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.582084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.582115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.582382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.582414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.582540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.582572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.582763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.582794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.583001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.583035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.583224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.583256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.583438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.583470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.583654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.583685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.583796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.583829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.584068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.584102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.584274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.584306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.584479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.584512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.584817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.584848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 
00:27:20.924 [2024-11-20 16:20:21.585031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.585064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.585194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.585227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.585400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.585431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.585631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.585663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.924 qpair failed and we were unable to recover it. 00:27:20.924 [2024-11-20 16:20:21.585834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.924 [2024-11-20 16:20:21.585866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 
00:27:20.925 [2024-11-20 16:20:21.585998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.586032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.586262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.586296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.586420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.586452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.586638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.586671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.586858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.586891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 
00:27:20.925 [2024-11-20 16:20:21.587006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.587037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.587178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.587209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.587335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.587366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.587472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.587504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.587743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.587775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 
00:27:20.925 [2024-11-20 16:20:21.587887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.587918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.588104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.588137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.588380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.588413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.588522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.588554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.588726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.588763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 
00:27:20.925 [2024-11-20 16:20:21.588890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.588922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.589136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.589169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.589364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.589396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.589573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.589606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.589788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.589819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 
00:27:20.925 [2024-11-20 16:20:21.589941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.589982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.590112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.590144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.590327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.590358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.590541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.590573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 00:27:20.925 [2024-11-20 16:20:21.590684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.925 [2024-11-20 16:20:21.590715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.925 qpair failed and we were unable to recover it. 
00:27:20.925 [2024-11-20 16:20:21.590837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.925 [2024-11-20 16:20:21.590870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.925 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously with timestamps from 16:20:21.590 through 16:20:21.614 ...]
00:27:20.929 [2024-11-20 16:20:21.614779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.929 [2024-11-20 16:20:21.614811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:20.929 qpair failed and we were unable to recover it.
00:27:20.929 [2024-11-20 16:20:21.614986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.615018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.615131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.615163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.615346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.615378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.615506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.615536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.615707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.615739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 
00:27:20.929 [2024-11-20 16:20:21.615865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.615896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.616019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.616052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.616238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.616269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.616516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.616548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.616666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.616697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 
00:27:20.929 [2024-11-20 16:20:21.616875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.616907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.617124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.617157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.617395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.617425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.617528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.617560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.617768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.617799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 
00:27:20.929 [2024-11-20 16:20:21.618016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.618049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.618236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.618267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.618394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.618426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.618610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.618642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.618770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.618802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 
00:27:20.929 [2024-11-20 16:20:21.619056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.619089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.619227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.619259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.619377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.619409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.619587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.619619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.619723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.619754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 
00:27:20.929 [2024-11-20 16:20:21.619873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.619903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.620018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.620049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.620266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.620298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.620487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.620518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.620757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.620788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 
00:27:20.929 [2024-11-20 16:20:21.620902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.620932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.621126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.621159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.621275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.621306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.929 [2024-11-20 16:20:21.621479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.929 [2024-11-20 16:20:21.621510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.929 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.621620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.621657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.621851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.621882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.622071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.622104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.622229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.622260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.622436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.622468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.622582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.622614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.622829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.622860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.623030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.623063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.623200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.623232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.623408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.623440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.623650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.623681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.623876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.623907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.624113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.624144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.624256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.624289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.624487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.624519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.624694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.624725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.624835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.624867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.625012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.625044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.625164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.625195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.625369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.625400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.625525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.625556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.625789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.625821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.625972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.626006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.626179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.626210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.626324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.626355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.626481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.626512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.626759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.626789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.626976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.627019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.627146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.627177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.627423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.627457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.627710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.627740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 
00:27:20.930 [2024-11-20 16:20:21.627918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.627957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.628212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.628243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.628370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.628403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.628583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.628614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.930 qpair failed and we were unable to recover it. 00:27:20.930 [2024-11-20 16:20:21.628784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.930 [2024-11-20 16:20:21.628815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it. 
00:27:20.931 [2024-11-20 16:20:21.629005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.931 [2024-11-20 16:20:21.629042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it. 00:27:20.931 [2024-11-20 16:20:21.629217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.931 [2024-11-20 16:20:21.629248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it. 00:27:20.931 [2024-11-20 16:20:21.629451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.931 [2024-11-20 16:20:21.629483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it. 00:27:20.931 [2024-11-20 16:20:21.629720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.931 [2024-11-20 16:20:21.629750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it. 00:27:20.931 [2024-11-20 16:20:21.629924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.931 [2024-11-20 16:20:21.629987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it. 
00:27:20.931 [2024-11-20 16:20:21.630218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.931 [2024-11-20 16:20:21.630250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.931 qpair failed and we were unable to recover it.
[the connect()/qpair-failure pair above repeated verbatim for every retry from 16:20:21.630430 through 16:20:21.654463; each attempt fails with errno = 111 (ECONNREFUSED) against 10.0.0.2:4420, tqpair=0x7fd848000b90 — only the timestamps differ]
00:27:20.934 [2024-11-20 16:20:21.654647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.654680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.654819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.654850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.655061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.655094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.655228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.655260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.655382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.655415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 
00:27:20.934 [2024-11-20 16:20:21.655592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.655625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.655805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.655836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.655965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.655998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.656124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.656156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.656361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.656394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 
00:27:20.934 [2024-11-20 16:20:21.656597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.656630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.656811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.656843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.657027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.657061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.657235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.657268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.657444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.657476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 
00:27:20.934 [2024-11-20 16:20:21.657647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.657679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.657789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.657821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.657939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.657982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.658129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.658160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 00:27:20.934 [2024-11-20 16:20:21.658338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.934 [2024-11-20 16:20:21.658377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.934 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.658564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.658595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.658776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.658807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.658926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.658968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.659154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.659186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.659393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.659424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.659597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.659630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.659815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.659846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.660036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.660070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.660255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.660288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.660464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.660496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.660611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.660643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.660756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.660792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.660919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.660964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.661139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.661170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.661283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.661314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.661506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.661537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.661717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.661749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.661853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.661885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.662070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.662104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.662281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.662313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.662439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.662471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.662667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.662699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.662963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.662996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.663166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.663199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.663382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.663415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.663606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.663639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.663745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.663777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.663986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.664020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.664210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.664241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.664410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.664441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.664688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.664719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.664986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.665018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.665210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.665242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.665474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.665507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.665767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.665799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 
00:27:20.935 [2024-11-20 16:20:21.665971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.666003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.666124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.935 [2024-11-20 16:20:21.666155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.935 qpair failed and we were unable to recover it. 00:27:20.935 [2024-11-20 16:20:21.666289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.666322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.666505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.666537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.666766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.666797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 
00:27:20.936 [2024-11-20 16:20:21.667066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.667099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.667233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.667263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.667452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.667484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.667663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.667694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.667810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.667842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 
00:27:20.936 [2024-11-20 16:20:21.668084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.668117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.668383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.668415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.668625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.668657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.668925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.668966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.669152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.669184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 
00:27:20.936 [2024-11-20 16:20:21.669446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.669478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.669659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.669700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.669881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.669913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.670040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.670073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.670195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.670226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 
00:27:20.936 [2024-11-20 16:20:21.670408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.670439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.670633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.670665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.670872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.670905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.671038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.671071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 00:27:20.936 [2024-11-20 16:20:21.671254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.936 [2024-11-20 16:20:21.671292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.936 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 16:20:21.694817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.694849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.694985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.695025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.695199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.695231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.695468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.695499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.695673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.695705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 16:20:21.695837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.695869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.696130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.696163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.696282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.696314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.696484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.696517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.696706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.696739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 16:20:21.697024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.697065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.697254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.697286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.697407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.697439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.697628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.697673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.697974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 16:20:21.698162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.698194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.698326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.698357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.698576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.698607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.698842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.698873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.699067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.699104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 16:20:21.699298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.699342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.699471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.699504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.699744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.699775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.699971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.700020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.700144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.700176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 16:20:21.700412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.700443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.700629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.700662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 16:20:21.700870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.939 [2024-11-20 16:20:21.700901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 16:20:21.701103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.940 [2024-11-20 16:20:21.701137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 16:20:21.701267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.940 [2024-11-20 16:20:21.701301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.940 qpair failed and we were unable to recover it. 
00:27:20.940 [2024-11-20 16:20:21.701506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.940 [2024-11-20 16:20:21.701550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 16:20:21.701853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.940 [2024-11-20 16:20:21.701886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:20.940 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.702164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.702199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.702320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.702352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.702470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.702501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 
00:27:21.262 [2024-11-20 16:20:21.702694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.702725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.702843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.702874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.702983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.703018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.703154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.703198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.703458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.703505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 
00:27:21.262 [2024-11-20 16:20:21.703736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.703784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.704092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.704149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.704477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.704524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.704682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.704725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.704995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.705043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 
00:27:21.262 [2024-11-20 16:20:21.705284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.262 [2024-11-20 16:20:21.705321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.262 qpair failed and we were unable to recover it. 00:27:21.262 [2024-11-20 16:20:21.705545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.705590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.705826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.705873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.706032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.706076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.706299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.706344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-20 16:20:21.706561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.706608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.706818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.706865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.707027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.707074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.707291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.707338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.707559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.707607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-20 16:20:21.707779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.707824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.708148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.708188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.708370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.708403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.708672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.708704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.708908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.708941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-20 16:20:21.709143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.709176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.709297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.709328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.709514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.709545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.709729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.709762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.709934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.710003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-20 16:20:21.710189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.710219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.710401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.710433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.710550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.710583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.710710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.710742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.710847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.710879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-20 16:20:21.711002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.711036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.711228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.711261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.711439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.711471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.711656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.711688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.711900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.711931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.263 [2024-11-20 16:20:21.712193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.712227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.712352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.712383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.712564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.712596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.712786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.712817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 00:27:21.263 [2024-11-20 16:20:21.712922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.263 [2024-11-20 16:20:21.712970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.263 qpair failed and we were unable to recover it. 
00:27:21.265 [2024-11-20 16:20:21.733208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.733239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.733438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.733472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.733581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.733613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.733725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.733759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.733873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.733906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 
00:27:21.265 [2024-11-20 16:20:21.734058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.734090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.734263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.734295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.734488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.734519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.734625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.734656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.265 [2024-11-20 16:20:21.734823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.734855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 
00:27:21.265 [2024-11-20 16:20:21.734969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.265 [2024-11-20 16:20:21.735002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.265 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.735187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.735217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.735356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.735390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.735560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.735592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.735725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.735756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.735866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.735899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.736101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.736133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.736391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.736422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.736538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.736570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.736696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.736728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.736911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.736970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.737214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.737245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.737428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.737460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.737652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.737684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.737933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.737971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.738154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.738187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.738396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.738427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.738557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.738592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.738793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.738825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.738945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.738997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.739234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.739265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.739369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.739400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.739587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.739619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.739830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.739862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.739976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.740010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.740130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.740161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.740273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.740305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.740505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.740536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.740714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.740752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.740959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.740992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.741126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.741158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.741350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.741381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.741492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.741522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.741658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.741690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.741882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.741913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.742164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.742198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.742309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.742341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.742449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.742480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.742679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.742711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.742815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.742846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.742966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.742999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.743118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.743149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.743275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.743307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.743416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.743447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.743573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.743603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.743810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.743842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.744023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.744056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.744161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.744194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.744368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.744400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.744521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.744551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.266 [2024-11-20 16:20:21.744653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.744684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.744880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.744911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.745108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.745143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.745261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.745294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 00:27:21.266 [2024-11-20 16:20:21.745476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.266 [2024-11-20 16:20:21.745507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.266 qpair failed and we were unable to recover it. 
00:27:21.267 [2024-11-20 16:20:21.745635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.745669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.745851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.745881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.748103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.748163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.748385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.748421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.748628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.748661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 
00:27:21.267 [2024-11-20 16:20:21.748854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.748884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.749042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.749076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.749282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.749313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.749492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.749523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.749705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.749737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 
00:27:21.267 [2024-11-20 16:20:21.749859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.749889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.750088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.750120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.750228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.750259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.750497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.750538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 00:27:21.267 [2024-11-20 16:20:21.750728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.267 [2024-11-20 16:20:21.750759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.267 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.772720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.772752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.772864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.772896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.773166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.773198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.773307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.773338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.773447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.773479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.773604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.773635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.773876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.773908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.774108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.774141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.774249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.774285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.774420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.774452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.774648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.774680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.774884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.774914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.775108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.775141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.775316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.775350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.775536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.775569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.775777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.775813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.775934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.775982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.776160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.776191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.776436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.776467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.776651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.776687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.776877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.776915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.777104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.777139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.777360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.777391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.777527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.777574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.777775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.777806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.777974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.778006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.778107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.778139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.778311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.778341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.778457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.778489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.778739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.778770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.778987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.779021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.779209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.779240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.779429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.779461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.779590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.779622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.779890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.779925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 
00:27:21.269 [2024-11-20 16:20:21.780102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.780134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.780263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.269 [2024-11-20 16:20:21.780295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.269 qpair failed and we were unable to recover it. 00:27:21.269 [2024-11-20 16:20:21.780475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.780506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.780604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.780636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.780840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.780871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.781041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.781074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.781188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.781218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.781427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.781457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.781631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.781665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.781834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.781865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.782006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.782041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.782260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.782292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.782483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.782512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.782753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.782790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.782967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.783000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.783107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.783137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.783257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.783291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.783533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.783565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.783761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.783792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.783980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.784012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.784127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.784158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.784398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.784430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.784542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.784573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.784673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.784710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.784826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.784858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.785120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.785153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.785339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.785371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.785506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.785538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.785652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.785684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.785874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.785904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.786035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.786068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.786258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.786290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.786529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.786559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.786769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.786801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.786908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.786940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.787132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.787164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.787359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.787391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.787494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.787525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.787649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.787681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.787858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.787889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.270 [2024-11-20 16:20:21.788062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.788096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.788205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.788236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.788500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.788532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.788641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.788672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 00:27:21.270 [2024-11-20 16:20:21.788932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.270 [2024-11-20 16:20:21.788976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.270 qpair failed and we were unable to recover it. 
00:27:21.272 [... the same connect() failure (errno = 111, ECONNREFUSED) from posix.c:1054:posix_sock_create and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock against tqpair=0x7fd848000b90, addr=10.0.0.2, port=4420 repeats continuously from 16:20:21.789152 through 16:20:21.812356; every qpair failed and was not recovered ...]
00:27:21.272 [2024-11-20 16:20:21.812543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.812576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.812770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.812801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.813040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.813075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.813199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.813231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.813513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.813544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 
00:27:21.272 [2024-11-20 16:20:21.813674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.813705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.813820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.813852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.813981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.814014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.814202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.814232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 00:27:21.272 [2024-11-20 16:20:21.814407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.272 [2024-11-20 16:20:21.814439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.272 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.814627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.814658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.814833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.814866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.814971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.815004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.815249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.815282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.815393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.815424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.815534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.815564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.815746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.815777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.815967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.816001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.816245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.816283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.816478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.816510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.816630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.816661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.816923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.816966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.817146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.817177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.817283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.817314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.817494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.817524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.817661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.817694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.817828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.817858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.818120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.818154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.818418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.818450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.818706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.818738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.819033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.819067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.819173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.819205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.819382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.819414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.819537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.819569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.819691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.819722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.819827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.819858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.820032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.820065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.820189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.820222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.820339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.820372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.820637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.820667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.820804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.820836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.820967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.821000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.821120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.821152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.821339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.821371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.821545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.821582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.821719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.821753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.821928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.821996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.822263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.822296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.822413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.822446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.822582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.822614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.822785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.822818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.822937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.822982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.823089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.823124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.823316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.823348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.823538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.823572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.823826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.823858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.824054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.824086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.824267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.824311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.824500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.824549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.824719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.824751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.824941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.824983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.825164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.825196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.825404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.825435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.825630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.825664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.825842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.825874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 
00:27:21.273 [2024-11-20 16:20:21.826058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.826091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.826273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.826303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.273 [2024-11-20 16:20:21.826424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.273 [2024-11-20 16:20:21.826455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.273 qpair failed and we were unable to recover it. 00:27:21.274 [2024-11-20 16:20:21.826691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.826723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 00:27:21.274 [2024-11-20 16:20:21.826848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.826879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 
00:27:21.274 [2024-11-20 16:20:21.827076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.827109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 00:27:21.274 [2024-11-20 16:20:21.827243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.827275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 00:27:21.274 [2024-11-20 16:20:21.827512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.827544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 00:27:21.274 [2024-11-20 16:20:21.827676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.827709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 00:27:21.274 [2024-11-20 16:20:21.827879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.274 [2024-11-20 16:20:21.827909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.274 qpair failed and we were unable to recover it. 
00:27:21.274 [2024-11-20 16:20:21.828130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.274 [2024-11-20 16:20:21.828163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.274 qpair failed and we were unable to recover it.
00:27:21.276 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 16:20:21.828 through 16:20:21.852; identical repeats omitted ...]
00:27:21.276 [2024-11-20 16:20:21.852430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.852461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.852591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.852625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.852959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.852993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.853165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.853196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.853331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.853361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.853599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.853629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.853887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.853917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.854079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.854112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.854332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.854370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.854502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.854534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.854706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.854736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.854939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.854983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.855225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.855257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.855362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.855393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.855592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.855623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.855741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.855778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.855967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.856001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.856118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.856148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.856325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.856356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.856541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.856572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.856764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.856795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.856969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.857002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.857193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.857224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.857412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.857443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.857649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.857679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.857855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.857886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.858149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.858183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.858381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.858412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.858673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.858704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.858832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.858864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.859049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.859082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.859215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.859247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.859381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.859413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.859588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.859620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.859857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.859890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.860031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.860063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.860241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.860273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.860510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.860541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.860777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.860809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.860934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.860976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 
00:27:21.276 [2024-11-20 16:20:21.861149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.861181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.861350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.861382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.861534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.861566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.861685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.276 [2024-11-20 16:20:21.861717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.276 qpair failed and we were unable to recover it. 00:27:21.276 [2024-11-20 16:20:21.861832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.861863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.862055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.862089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.862214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.862244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.862367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.862399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.862589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.862621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.862807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.862838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.862959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.862991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.863114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.863146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.863338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.863370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.863649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.863680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.863912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.863944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.864082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.864120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.864231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.864263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.864380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.864411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.864531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.864562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.864757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.864789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.864965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.864999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.865180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.865212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.865326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.865357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.865461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.865492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.865620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.865652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.865843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.865874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.866045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.866078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.866211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.866243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.866366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.866404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.866599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.866631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.866757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.866789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.866910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.866943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.867225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.867256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.867454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.867485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.867678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.867709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.867972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.868006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.868138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.868170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.868408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.868440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.868609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.868639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.868822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.868853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.869037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.869069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.869181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.869212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.869341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.869373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.869546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.869578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.869775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.869806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.870092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.870127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.870312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.870343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.870464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.870494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.870668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.870699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.870936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.870976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.871215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.871247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.871427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.871459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.871698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.871729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.871856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.871888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.872094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.872127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.872324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.872366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.872552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.872591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.872723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.872756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.872868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.872900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.873146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.873181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 
00:27:21.277 [2024-11-20 16:20:21.873311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.277 [2024-11-20 16:20:21.873343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.277 qpair failed and we were unable to recover it. 00:27:21.277 [2024-11-20 16:20:21.873512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.873543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.873675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.873705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.873901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.873935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.874156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.874191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.874390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.874422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.874605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.874639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.874809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.874840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.874973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.875006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.875147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.875178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.875301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.875334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.875573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.875606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.875728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.875763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.876012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.876048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.876242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.876276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.876398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.876429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.876630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.876662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.876897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.876928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.877126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.877165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.877359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.877394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.877526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.877560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.877757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.877789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.877916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.877971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.878217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.878249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.878378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.878409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.878649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.878682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.878878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.878909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.879171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.879204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.879400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.879432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.879625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.879658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.879839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.879871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.880046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.880079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.880193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.880224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.880348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.880380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.880650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.880684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.880867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.880905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.881101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.881133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.881253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.881285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.881409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.881442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.881689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.881721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.881835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.881867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.881988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.882021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.882147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.882179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.882366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.882398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.882527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.882771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.882803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.883006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.883039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.883226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.883257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.883400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.883432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.883577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.883609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 
00:27:21.278 [2024-11-20 16:20:21.883794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.883825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.884040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.278 [2024-11-20 16:20:21.884073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.278 qpair failed and we were unable to recover it. 00:27:21.278 [2024-11-20 16:20:21.884259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.884290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.884489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.884521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.884728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.884759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 
00:27:21.279 [2024-11-20 16:20:21.884881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.884912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.885035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.885068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.885247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.885278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.885566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.885597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.885781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.885813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 
00:27:21.279 [2024-11-20 16:20:21.886050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.886083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.886202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.886233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.886357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.886389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.886491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.886523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.886633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.886664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 
00:27:21.279 [2024-11-20 16:20:21.886769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.886800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.886986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.887018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.887276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.887307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.887489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.887520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.887711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.887742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 
00:27:21.279 [2024-11-20 16:20:21.887925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.887968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.888082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.888114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.888307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.888339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.888624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.888655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.888831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.888863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 
00:27:21.279 [2024-11-20 16:20:21.888987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.889026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.889265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.889298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.889551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.889582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.889786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.889818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 00:27:21.279 [2024-11-20 16:20:21.889967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.279 [2024-11-20 16:20:21.890001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.279 qpair failed and we were unable to recover it. 
00:27:21.279 [2024-11-20 16:20:21.890150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.890182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.890316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.890346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.890584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.890615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.890793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.890825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.890958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.890992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.891215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.891247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.891443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.891475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.891712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.891744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.891876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.891908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.892104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.892136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.892373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.892404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.892525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.892556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.892762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.892793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.892919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.892958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.893166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.893199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.893372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.893402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.893649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.893682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.893803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.893836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.894009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.894043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.894204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.894238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.894411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.894444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.894569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.894601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.894782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.894814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.894941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.894982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.895228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.895260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.895446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.895478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.895658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.895689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.895802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.895835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.279 qpair failed and we were unable to recover it.
00:27:21.279 [2024-11-20 16:20:21.896019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.279 [2024-11-20 16:20:21.896051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.896227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.896258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.896448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.896480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.896757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.896788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.896984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.897017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.897208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.897239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.897379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.897410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.897610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.897648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.897848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.897881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.898069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.898104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.898361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.898394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.898587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.898619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.898730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.898762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.898933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.898975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.899145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.899178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.899301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.899334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.899514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.899547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.899720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.899753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.899991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.900024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.900285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.900317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.900505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.900537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.900683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.900714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.900830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.900862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.901057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.901089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.901296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.901328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.901567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.901598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.901719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.901749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.901873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.901905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.902097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.902129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.902303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.902334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.902525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.902561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.902691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.902724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.902935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.902978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.903158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.903189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.903375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.903407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.903581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.903613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.903738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.903769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.903945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.903990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.904110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.904142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.904320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.904352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.904617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.904650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.904844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.904875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.905045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.905079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.905203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.905236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.905414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.905446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.905587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.905620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.905802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.905835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.906033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.906071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.906254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.906286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.906411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.906443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.906616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.906647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.906924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.906966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.907148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.907180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.907355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.907386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.907575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.907608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.907740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.907773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.907888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.280 [2024-11-20 16:20:21.907920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.280 qpair failed and we were unable to recover it.
00:27:21.280 [2024-11-20 16:20:21.908116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.908150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.908271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.908303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.908423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.908456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.908694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.908726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.908855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.908888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.909175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.909208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.909391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.909423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.909529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.909561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.909750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.909781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.910050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.910084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.910208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.281 [2024-11-20 16:20:21.910242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.281 qpair failed and we were unable to recover it. 00:27:21.281 [2024-11-20 16:20:21.910366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.281 [2024-11-20 16:20:21.910399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.281 qpair failed and we were unable to recover it. 00:27:21.281 [2024-11-20 16:20:21.910510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.281 [2024-11-20 16:20:21.910542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.281 qpair failed and we were unable to recover it. 00:27:21.281 [2024-11-20 16:20:21.910711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.281 [2024-11-20 16:20:21.910743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.281 qpair failed and we were unable to recover it. 00:27:21.281 [2024-11-20 16:20:21.910861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.281 [2024-11-20 16:20:21.910893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:21.281 qpair failed and we were unable to recover it. 
00:27:21.281 [2024-11-20 16:20:21.911074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.911107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.911315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.911347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.911602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.911676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.911992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.912031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.281 [2024-11-20 16:20:21.912225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.281 [2024-11-20 16:20:21.912259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.281 qpair failed and we were unable to recover it.
00:27:21.283 [2024-11-20 16:20:21.932578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.932609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.932735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.932765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.933028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.933061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.933193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.933224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.933348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.933379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.933616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.933647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.933855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.933885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.934079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.934112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.934247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.934277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.934401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.934432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.934667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.934699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.934883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.934913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.935200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.935233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.935441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.935473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.935651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.935682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.935878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.935908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.936124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.936157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.936269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.936300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.936414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.936444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.936702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.936733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.936972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.937005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.937194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.937225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.937406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.937437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.937676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.937707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.937994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.938027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.938152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.938182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.938305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.938336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.938505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.938536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.938708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.938739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.938942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.938992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.939162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.939194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.939396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.939426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.939524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.939554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.939763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.939795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.940032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.940064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.940245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.940275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.940396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.940428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.940606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.940637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.940827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.940858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.940994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.941028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.941144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.941175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.941297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.941328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.941498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.941529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.941701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.941732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.941920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.941958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.942147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.942177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.942345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.942377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.942509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.942540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.942645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.942675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.942922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.942961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 
00:27:21.283 [2024-11-20 16:20:21.943080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.283 [2024-11-20 16:20:21.943111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.283 qpair failed and we were unable to recover it. 00:27:21.283 [2024-11-20 16:20:21.943372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.943409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.943691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.943722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.943893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.943924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.944038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.944070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 
00:27:21.284 [2024-11-20 16:20:21.944255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.944286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.944458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.944488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.944664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.944695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.944862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.944893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.945113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.945145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 
00:27:21.284 [2024-11-20 16:20:21.945329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.945359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.945563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.945595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.945765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.945795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.945969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.946002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.946184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.946215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 
00:27:21.284 [2024-11-20 16:20:21.946406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.946438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.946638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.946670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.946842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.946873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.947056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.947088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.947327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.947357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 
00:27:21.284 [2024-11-20 16:20:21.947526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.947556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.947748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.947778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.947883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.947913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.948101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.948133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.948272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.948302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 
00:27:21.284 [2024-11-20 16:20:21.948510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.948541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.948655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.948686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.948945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.948995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.949100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.949137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 00:27:21.284 [2024-11-20 16:20:21.949397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.284 [2024-11-20 16:20:21.949429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.284 qpair failed and we were unable to recover it. 
00:27:21.286 [2024-11-20 16:20:21.972853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.972883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.973125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.973157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.973280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.973311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.973515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.973547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.973668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.973699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 
00:27:21.286 [2024-11-20 16:20:21.973811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.973842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.974020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.974052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.974290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.974321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.974444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.974476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.974723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.974754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 
00:27:21.286 [2024-11-20 16:20:21.974870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.974909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.975120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.975152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.975290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.975322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.975502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.975532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.975638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.975669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 
00:27:21.286 [2024-11-20 16:20:21.975772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.975804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.975996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.976141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.976171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.976387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.976419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.976527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.976557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 
00:27:21.286 [2024-11-20 16:20:21.976693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.976724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.976840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.976871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.977013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.286 [2024-11-20 16:20:21.977044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.286 qpair failed and we were unable to recover it. 00:27:21.286 [2024-11-20 16:20:21.977309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.977341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.977534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.977567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.977685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.977716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.977921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.977962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.978142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.978173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.978359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.978390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.978516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.978547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.978723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.978753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.978860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.978891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.979003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.979035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.979222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.979253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.979446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.979476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.979715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.979747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.979871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.979901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.980026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.980058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.980181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.980212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.980384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.980414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.980519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.980549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.980668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.980699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.980877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.980908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.981235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.981267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.981400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.981432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.981536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.981566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.981816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.981847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.981978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.982126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.982269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.982477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.982614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.982919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.982959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.983066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.983097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.983228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.983259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.983364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.983394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.983579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.983610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.983854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.983884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.984114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.984148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.984266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.984297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.984511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.984542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.984736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.984765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.984890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.984920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.985127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.985158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.985427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.985460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.985665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.985696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.985873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.985904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.986037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.986076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.986194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.986224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 
00:27:21.287 [2024-11-20 16:20:21.986356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.986386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.986581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.986612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.986820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.986850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.987059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.287 [2024-11-20 16:20:21.987090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.287 qpair failed and we were unable to recover it. 00:27:21.287 [2024-11-20 16:20:21.987226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.987258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 
00:27:21.288 [2024-11-20 16:20:21.987438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.987470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.987645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.987676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.987786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.987816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.988000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.988038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.988176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.988207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 
00:27:21.288 [2024-11-20 16:20:21.988413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.988444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.988617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.988648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.988818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.988848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.988966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.988999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 00:27:21.288 [2024-11-20 16:20:21.989106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.288 [2024-11-20 16:20:21.989137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.288 qpair failed and we were unable to recover it. 
00:27:21.288 [2024-11-20 16:20:21.989268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.989298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.989536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.989567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.989701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.989731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.989971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.990004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.990220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.990252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.990372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.990403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.990642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.990673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.990935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.990976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.991113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.991144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.991333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.991363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.991609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.991640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.991812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.991842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.992067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.992100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.992283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.992314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.992500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.992530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.992709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.992740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.992857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.992888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.993157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.993190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.993364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.993395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.993523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.993553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.993673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.993709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.993825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.993856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.994089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.994121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.994299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.994330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.994502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.994533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.994642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.994674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.994894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.994925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.995104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.995136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.995239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.995270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.995516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.995546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.995678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.995708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.995998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.996031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.996220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.996251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.996361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.996392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.996511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.996544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.996712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.996744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.996919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.996956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.997092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.997124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.997240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.997272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.997398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.997429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.997561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.997591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.997694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.997726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.997852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.997883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.998151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.998184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.998366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.998398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.998636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.998667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.998925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.288 [2024-11-20 16:20:21.998964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.288 qpair failed and we were unable to recover it.
00:27:21.288 [2024-11-20 16:20:21.999153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:21.999184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:21.999307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:21.999338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:21.999453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:21.999486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:21.999599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:21.999630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:21.999802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:21.999833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.000028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.000061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.000246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.000277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.000463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.000494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.000620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.000650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.000846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.000878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.000998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.001030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.001209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.001240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.001345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.001375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.001548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.001578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.001764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.001795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.001971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.002004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.002123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.002155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.002328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.002359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.002486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.002517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.002704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.002736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.002853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.002884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.003068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.003102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.003281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.003313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.003546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.003577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.003692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.003723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.003930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.003973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.004089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.004120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.004235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.004267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.004384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.004415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.004607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.004637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.004805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.004835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.005009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.005042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.005183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.005215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.005442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.005472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.005655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.005686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.005863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.005895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.006020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.006052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.006318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.006349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.006627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.006657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.006825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.006856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.006988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.007020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.007208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.007246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.007358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.007389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.007637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.007668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.007855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.007886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.008021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.008054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.008168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.008199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.008309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.289 [2024-11-20 16:20:22.008341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.289 qpair failed and we were unable to recover it.
00:27:21.289 [2024-11-20 16:20:22.008452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.008484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.008662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.008694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.008874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.008905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.009025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.009058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.009229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.009259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 
00:27:21.289 [2024-11-20 16:20:22.009497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.009528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.009723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.009754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.009934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.009978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.010161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.289 [2024-11-20 16:20:22.010193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.289 qpair failed and we were unable to recover it. 00:27:21.289 [2024-11-20 16:20:22.010459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.010491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.010704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.010734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.010959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.010992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.011135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.011166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.011278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.011309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.011547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.011578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.011696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.011727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.011834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.011866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.012071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.012103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.012274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.012305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.012436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.012467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.012735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.012771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.012943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.013010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.013119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.013151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.013394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.013425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.013548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.013579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.013682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.013713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.013885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.013916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.014030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.014061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.014250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.014282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.014449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.014480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.014654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.014685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.014867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.014898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.015074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.015105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.015285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.015316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.015519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.015551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.015758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.015789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.016033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.016066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.016250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.016282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.016454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.016485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.016593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.016625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.016842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.016873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.016989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.017022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.017141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.017173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.017356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.017388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.017510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.017540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.017777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.017808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.017998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.018031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.018150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.018186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.018289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.018320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.018436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.018468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.018706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.018737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.018916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.018946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.019072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.019103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.019227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.019259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.019446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.019477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.019595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.019626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.019804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.019836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.020018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.020051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.020177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.020208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.020341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.020371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.020477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.020508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.020711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.020742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.020916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.020967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.021235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.021266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 00:27:21.290 [2024-11-20 16:20:22.021387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.290 [2024-11-20 16:20:22.021418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.290 qpair failed and we were unable to recover it. 
00:27:21.290 [2024-11-20 16:20:22.021535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.021566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.021823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.021854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.021965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.021998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.022171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.022203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.022324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.022354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 
00:27:21.291 [2024-11-20 16:20:22.022480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.022512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.022799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.022830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.023018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.023050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.023269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.023301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.023488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.023519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 
00:27:21.291 [2024-11-20 16:20:22.023706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.023738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.023851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.023883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.024080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.024113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.024346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.024377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.024496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.024527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 
00:27:21.291 [2024-11-20 16:20:22.024659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.024690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.024862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.024894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.025031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.025063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.025323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.025354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 00:27:21.291 [2024-11-20 16:20:22.025476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.291 [2024-11-20 16:20:22.025507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.291 qpair failed and we were unable to recover it. 
00:27:21.291 [2024-11-20 16:20:22.025747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.291 [2024-11-20 16:20:22.025777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.291 qpair failed and we were unable to recover it.
00:27:21.291 [... the same three-line connect()/qpair-failure sequence repeats for 113 further attempts between 16:20:22.026041 and 16:20:22.050254, all with errno = 111, tqpair=0x1b7cba0, addr=10.0.0.2, port=4420 ...]
00:27:21.581 [2024-11-20 16:20:22.050372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.581 [2024-11-20 16:20:22.050402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.581 qpair failed and we were unable to recover it.
00:27:21.581 [2024-11-20 16:20:22.050598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.050630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.050803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.050833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.051001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.051034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.051155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.051186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.051316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.051348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 
00:27:21.581 [2024-11-20 16:20:22.051584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.051615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.051834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.051865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.051986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.052019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.052189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.052221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.052348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.052379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 
00:27:21.581 [2024-11-20 16:20:22.052551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.052588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.052826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.052857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.053056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.053089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.053267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.053298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 00:27:21.581 [2024-11-20 16:20:22.053478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.581 [2024-11-20 16:20:22.053508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.581 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.053701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.053732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.053912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.053944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.054121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.054152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.054282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.054312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.054550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.054582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.054684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.054715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.054831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.054861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.055033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.055066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.055253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.055284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.055556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.055587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.055718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.055750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.055953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.055986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.056249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.056280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.056452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.056484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.056657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.056688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.056804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.056837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.057012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.057045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.057179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.057211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.057345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.057376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.057549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.057579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.057784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.057815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.058015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.058050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.058292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.058329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.058526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.058558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.058742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.058774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.058956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.058988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.059120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.059153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.059259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.059290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.059483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.059515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.059648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.059678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.059912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.059943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.060138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.060170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.060344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.060375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.060505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.060537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.060789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.060820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 
00:27:21.582 [2024-11-20 16:20:22.061049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.061084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.061281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.061314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.061498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.582 [2024-11-20 16:20:22.061531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.582 qpair failed and we were unable to recover it. 00:27:21.582 [2024-11-20 16:20:22.061764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.061795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.062029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.062061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 
00:27:21.583 [2024-11-20 16:20:22.062191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.062223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.062410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.062442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.062632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.062664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.062856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.062887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.063099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.063132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 
00:27:21.583 [2024-11-20 16:20:22.063254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.063287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.063468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.063500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.063676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.063707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.063897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.063929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.064198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.064237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 
00:27:21.583 [2024-11-20 16:20:22.064357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.064390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.064612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.064643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.064773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.064805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.064935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.064996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.065092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.065124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 
00:27:21.583 [2024-11-20 16:20:22.065306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.065338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.065523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.065555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.065740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.065771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.065979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.066012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 00:27:21.583 [2024-11-20 16:20:22.066134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.583 [2024-11-20 16:20:22.066167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:21.583 qpair failed and we were unable to recover it. 
00:27:21.583 [2024-11-20 16:20:22.066277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.583 [2024-11-20 16:20:22.066308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:21.583 qpair failed and we were unable to recover it.
[... the same three-line failure repeated 29 more times for tqpair=0x1b7cba0 (timestamps 16:20:22.066511 through 16:20:22.072256) ...]
00:27:21.584 [2024-11-20 16:20:22.072531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.584 [2024-11-20 16:20:22.072604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.584 qpair failed and we were unable to recover it.
[... the same three-line failure repeated 83 more times for tqpair=0x7fd850000b90 (timestamps 16:20:22.072741 through 16:20:22.090190) ...]
00:27:21.586 [2024-11-20 16:20:22.090464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.586 [2024-11-20 16:20:22.090494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.586 qpair failed and we were unable to recover it.
00:27:21.586 [2024-11-20 16:20:22.090707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.090738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.090964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.090997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.091276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.091307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.091480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.091511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.091626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.091657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 
00:27:21.586 [2024-11-20 16:20:22.091787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.091818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.091992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.092025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.092194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.092225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.586 [2024-11-20 16:20:22.092440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.586 [2024-11-20 16:20:22.092471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.586 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.092670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.092701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.092883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.092915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.093063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.093096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.093224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.093255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.093543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.093574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.093759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.093790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.094049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.094082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.094206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.094238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.094438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.094468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.094707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.094739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.094926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.094968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.095155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.095185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.095358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.095389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.095532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.095564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.095750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.095781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.095973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.096006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.096185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.096217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.096335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.096365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.096490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.096520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.096701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.096730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.096909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.096939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.097186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.097216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.097460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.097490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.097609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.097639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.097822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.097853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.098054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.098086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.098302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.098340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.098457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.098487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.098616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.098646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.098847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.098876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.099062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.099094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.099298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.099329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.099522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.099552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.099668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.099697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.099890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.099921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.100114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.100143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 
00:27:21.587 [2024-11-20 16:20:22.100317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.100348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.587 qpair failed and we were unable to recover it. 00:27:21.587 [2024-11-20 16:20:22.100528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.587 [2024-11-20 16:20:22.100560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.100731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.100762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.100967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.100998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.101127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.101158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 
00:27:21.588 [2024-11-20 16:20:22.101267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.101297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.101537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.101568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.101679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.101709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.101845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.101875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.102067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.102119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 
00:27:21.588 [2024-11-20 16:20:22.102316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.102345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.102467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.102497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.102739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.102770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.102939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.102978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.103154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.103191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 
00:27:21.588 [2024-11-20 16:20:22.103317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.103348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.103453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.103485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.103702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.103733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.103924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.103964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.104136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.104167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 
00:27:21.588 [2024-11-20 16:20:22.104404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.104435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.104688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.104720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.104903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.104935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.105231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.105263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.105393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.105426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 
00:27:21.588 [2024-11-20 16:20:22.105533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.105566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.105765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.105796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.106057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.106090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.106327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.106360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 00:27:21.588 [2024-11-20 16:20:22.106612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.588 [2024-11-20 16:20:22.106642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.588 qpair failed and we were unable to recover it. 
00:27:21.588 [2024-11-20 16:20:22.106907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.588 [2024-11-20 16:20:22.106945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.588 qpair failed and we were unable to recover it.
00:27:21.591 [... same three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated through [2024-11-20 16:20:22.134829] ...]
00:27:21.591 [2024-11-20 16:20:22.135007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.591 [2024-11-20 16:20:22.135041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.591 qpair failed and we were unable to recover it. 00:27:21.591 [2024-11-20 16:20:22.135298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.591 [2024-11-20 16:20:22.135331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.591 qpair failed and we were unable to recover it. 00:27:21.591 [2024-11-20 16:20:22.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.591 [2024-11-20 16:20:22.135550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.591 qpair failed and we were unable to recover it. 00:27:21.591 [2024-11-20 16:20:22.135808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.135838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.136084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.136118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.136326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.136369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.136631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.136662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.136780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.136812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.137138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.137172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.137355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.137386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.137621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.137654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.137839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.137871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.138110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.138143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.138360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.138598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.138630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.138813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.138846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.139035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.139068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.139254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.139287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.139523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.139555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.139745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.139776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.140064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.140096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.140312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.140344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.140610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.140640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.140930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.140991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.141233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.141264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.141563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.141594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.141856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.141887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.142068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.142102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.142361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.142392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.142629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.142660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.142830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.142861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.143089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.143122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.143316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.143349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.143591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.143622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.143861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.143894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.144156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.144189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.144322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.144353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.144595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.144627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.144838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.144869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 00:27:21.592 [2024-11-20 16:20:22.145132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-20 16:20:22.145164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.592 qpair failed and we were unable to recover it. 
00:27:21.592 [2024-11-20 16:20:22.145403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.145434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.145699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.145731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.145978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.146011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.146207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.146238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.146505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.146539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 
00:27:21.593 [2024-11-20 16:20:22.146745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.146782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.146891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.146922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.147200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.147234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.147499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.147532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.147715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.147747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 
00:27:21.593 [2024-11-20 16:20:22.148014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.148048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.148262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.148294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.148534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.148565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.148804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.148835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.149098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.149130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 
00:27:21.593 [2024-11-20 16:20:22.149314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.149346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.149609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.149640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.149793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.149824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.150015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.150048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.150178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.150210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 
00:27:21.593 [2024-11-20 16:20:22.150468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.150498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.150799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.150831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.151091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.151123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.151430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.151461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.151713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.151743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 
00:27:21.593 [2024-11-20 16:20:22.152023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.152056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.152314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.152346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.152611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.152641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.152930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.152970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.153255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.153287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 
00:27:21.593 [2024-11-20 16:20:22.153556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.153586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.153825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.154124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.154157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.593 qpair failed and we were unable to recover it. 00:27:21.593 [2024-11-20 16:20:22.154445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-20 16:20:22.154476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.594 qpair failed and we were unable to recover it. 00:27:21.594 [2024-11-20 16:20:22.154748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.594 [2024-11-20 16:20:22.154780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.594 qpair failed and we were unable to recover it. 
00:27:21.594 [2024-11-20 16:20:22.155067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.594 [2024-11-20 16:20:22.155099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.594 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" error triplets for tqpair=0x7fd850000b90 (addr=10.0.0.2, port=4420) repeat through 16:20:22.186506; repeats elided]
00:27:21.597 [2024-11-20 16:20:22.186696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.186727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.186930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.186972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.187251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.187285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.187408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.187438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.187625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.187656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 
00:27:21.597 [2024-11-20 16:20:22.187898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.187931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.188139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.188170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.188316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.188347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.188552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.188585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.188761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.188793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 
00:27:21.597 [2024-11-20 16:20:22.188929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.188985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.189281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.189314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.189495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.189526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.189651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.189682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.189925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.189967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 
00:27:21.597 [2024-11-20 16:20:22.190147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.190189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.190434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.190465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.190687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.190718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.190939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.190983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.191178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.191211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 
00:27:21.597 [2024-11-20 16:20:22.191503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.191535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.191805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.191857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.192042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.192076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.192373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.192405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.192633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.192665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 
00:27:21.597 [2024-11-20 16:20:22.192860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.192892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.193105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.193138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.193384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.193417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.193592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.193624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 00:27:21.597 [2024-11-20 16:20:22.193876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.193908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.597 qpair failed and we were unable to recover it. 
00:27:21.597 [2024-11-20 16:20:22.194131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.597 [2024-11-20 16:20:22.194164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.194365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.194398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.194591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.194622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.194890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.194922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.195208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.195241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 
00:27:21.598 [2024-11-20 16:20:22.195380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.195410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.195677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.195709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.195927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.195972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.196243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.196274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.196498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.196530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 
00:27:21.598 [2024-11-20 16:20:22.196821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.196852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.197037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.197070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.197365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.197398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.197529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.197560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.197854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.197887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 
00:27:21.598 [2024-11-20 16:20:22.198025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.198057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.198244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.198278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.198471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.198503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.198750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.198782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.199079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.199112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 
00:27:21.598 [2024-11-20 16:20:22.199375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.199408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.199554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.199586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.199720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.199752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.199999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.200032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.200247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.200278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 
00:27:21.598 [2024-11-20 16:20:22.200546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.200584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.200827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.200859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.201110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.201144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.201269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.201300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.201561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.201593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 
00:27:21.598 [2024-11-20 16:20:22.201863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.598 [2024-11-20 16:20:22.201895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.598 qpair failed and we were unable to recover it. 00:27:21.598 [2024-11-20 16:20:22.202215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.202249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.202452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.202484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.202607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.202638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.202929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.202971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 
00:27:21.599 [2024-11-20 16:20:22.203123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.203156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.203339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.203373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.203654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.203685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.203970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.204004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.204153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.204186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 
00:27:21.599 [2024-11-20 16:20:22.204394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.204426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.204693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.204725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.204913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.204944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.205074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.205106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 00:27:21.599 [2024-11-20 16:20:22.205300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.599 [2024-11-20 16:20:22.205332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.599 qpair failed and we were unable to recover it. 
00:27:21.599 [2024-11-20 16:20:22.205583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.599 [2024-11-20 16:20:22.205615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.599 qpair failed and we were unable to recover it.
00:27:21.602 [previous error pair and recovery failure repeated 114 more times for tqpair=0x7fd850000b90 (addr=10.0.0.2, port=4420), timestamps 16:20:22.205911 through 16:20:22.235897]
00:27:21.602 [2024-11-20 16:20:22.236173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.236208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.236406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.236438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.236718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.236750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.237002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.237036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.237341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.237372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 
00:27:21.602 [2024-11-20 16:20:22.237641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.237673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.237982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.238017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.238218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.238250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.238467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.238744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.238776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 
00:27:21.602 [2024-11-20 16:20:22.239079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.239113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.239295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.602 [2024-11-20 16:20:22.239326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.602 qpair failed and we were unable to recover it. 00:27:21.602 [2024-11-20 16:20:22.239462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.239501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.239774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.239807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.240061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.240095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.240297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.240330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.240605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.240637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.240848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.240880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.241170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.241203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.241429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.241461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.241738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.241770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.241972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.242004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.242257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.242290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.242590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.242623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.242890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.242922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.243220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.243253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.243466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.243499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.243776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.243808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.244063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.244097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.244240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.244273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.244545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.244576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.244834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.244866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.245087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.245121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.245315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.245346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.245594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.245626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.245931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.245975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.246121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.246153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.246434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.246466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.246760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.246792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.246999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.247033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.247167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.247199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.247391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.247424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.247676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.247707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.247817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.247849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.248060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.248095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 
00:27:21.603 [2024-11-20 16:20:22.248372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.248404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.248680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.248713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.249006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.603 [2024-11-20 16:20:22.249040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.603 qpair failed and we were unable to recover it. 00:27:21.603 [2024-11-20 16:20:22.249285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.249318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.249579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.249612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.604 [2024-11-20 16:20:22.249912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.249944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.250204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.250237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.250422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.250460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.250745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.250776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.251039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.251073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.604 [2024-11-20 16:20:22.251209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.251241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.251496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.251529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.251708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.251741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.251997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.252031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.252172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.252204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.604 [2024-11-20 16:20:22.252481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.252513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.252719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.252752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.252943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.252986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.253235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.253268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.253484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.253516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.604 [2024-11-20 16:20:22.253782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.253814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.254047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.254081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.254274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.254307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.254539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.254571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.254841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.254874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.604 [2024-11-20 16:20:22.255173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.255207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.255466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.255499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.255777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.255809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.256015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.256049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.256256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.256288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.604 [2024-11-20 16:20:22.256549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.256580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.256763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.256796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.257078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.257112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.257314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.257347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 00:27:21.604 [2024-11-20 16:20:22.257631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.604 [2024-11-20 16:20:22.257664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.604 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.288387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.288419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.288619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.288652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.288932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.288974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.289208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.289240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.289389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.289421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.289674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.289707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.289967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.290000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.290300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.290333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.290612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.290644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.290929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.290973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.291232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.291263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.291506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.291538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.291813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.291845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.292097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.292131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.292315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.292347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.292534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.292567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.292783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.292815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.293092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.293126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.293329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.293361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.293551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.293583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.293835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.293867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.294171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.294205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.294467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.294506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.294702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.294734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.294989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.295022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.295321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.295353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.295545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.295578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.295851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.295883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.296164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.296197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.296394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.296425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.296614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.296647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.296899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.296931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.297198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.297231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.297530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.297563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.608 [2024-11-20 16:20:22.297829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.297860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 
00:27:21.608 [2024-11-20 16:20:22.298114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.608 [2024-11-20 16:20:22.298148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.608 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.298413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.298446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.298729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.298761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.299065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.299098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.299301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.299333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.299594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.299625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.299875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.299908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.300179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.300212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.300462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.300495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.300770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.300802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.300983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.301017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.301220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.301251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.301432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.301464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.301737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.301769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.302066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.302100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.302372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.302405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.302664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.302696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.302962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.302996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.303248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.303280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.303577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.303609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.303891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.303922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.304211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.304244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.304522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.304770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.304802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.305019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.305052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.305181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.305212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.305487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.305518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.305839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.305876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.306160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.306195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.306474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.306506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.306785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.306817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.307107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.307141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.307418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.307450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.307674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.307706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.307983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.308017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 
00:27:21.609 [2024-11-20 16:20:22.308303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.308335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.308633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.308666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.609 [2024-11-20 16:20:22.308932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.609 [2024-11-20 16:20:22.308974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.609 qpair failed and we were unable to recover it. 00:27:21.610 [2024-11-20 16:20:22.309271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.610 [2024-11-20 16:20:22.309303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.610 qpair failed and we were unable to recover it. 00:27:21.610 [2024-11-20 16:20:22.309533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.610 [2024-11-20 16:20:22.309566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.610 qpair failed and we were unable to recover it. 
00:27:21.610 [2024-11-20 16:20:22.309767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.610 [2024-11-20 16:20:22.309799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.610 qpair failed and we were unable to recover it. 
00:27:21.610 [... the same error triplet — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats roughly 114 more times between 16:20:22.310 and 16:20:22.341; repeated occurrences elided ...] 
00:27:21.613 [2024-11-20 16:20:22.341593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.341626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.341807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.341838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.342111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.342146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.342422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.342455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.342645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.342678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 
00:27:21.613 [2024-11-20 16:20:22.342860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.342892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.343191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.343225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.343459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.343491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.343740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.343772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.344033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.344067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 
00:27:21.613 [2024-11-20 16:20:22.344321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.344354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.344560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.344592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.344800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.344831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.345109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.345143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.345429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.345461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 
00:27:21.613 [2024-11-20 16:20:22.345737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.345769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.345970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.346004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.346308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.346341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.346596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.346628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.346888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.346921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 
00:27:21.613 [2024-11-20 16:20:22.347222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.347255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.347525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.347558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.347763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.347795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.348062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.348096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.348307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.348338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 
00:27:21.613 [2024-11-20 16:20:22.348587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.348619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.348873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.348905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.349130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.349163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.613 [2024-11-20 16:20:22.349438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.613 [2024-11-20 16:20:22.349469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.613 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.349736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.349770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.350047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.350081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.350371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.350405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.350588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.350620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.350912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.350946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.351191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.351224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.351543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.351576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.351849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.351883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.352085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.352120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.352282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.352315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.352574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.352606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.352800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.352833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.353041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.353075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.353297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.353330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.353525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.353558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.353882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.353915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.354075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.354109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.354306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.354340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.354562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.354595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.354860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.354891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.355190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.355224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.355375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.355406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.355548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.355580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.355855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.355888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.356182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.356216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.356489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.356521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.356711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.356743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.356933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.356978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.357258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.357292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.357585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.357616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.357895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.357935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.358215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.358249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.358476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.358508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.358763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.358795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.359110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.359145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.359361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.359394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 
00:27:21.614 [2024-11-20 16:20:22.359677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.359710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.614 [2024-11-20 16:20:22.360037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.614 [2024-11-20 16:20:22.360072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.614 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.360266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.360299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.360483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.360516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.360710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.360743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 
00:27:21.615 [2024-11-20 16:20:22.361020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.361054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.361236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.361268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.361486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.361519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.361724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.361756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 00:27:21.615 [2024-11-20 16:20:22.361994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.362027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 
00:27:21.615 [2024-11-20 16:20:22.362285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.615 [2024-11-20 16:20:22.362319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.615 qpair failed and we were unable to recover it. 
[... the same two-line failure (connect() errno = 111, i.e. ECONNREFUSED, followed by the sock connection error for tqpair=0x7fd850000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 16:20:22.362 through 16:20:22.390; repeated occurrences elided ...]
00:27:21.901 [2024-11-20 16:20:22.390505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.390539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 
00:27:21.901 [2024-11-20 16:20:22.390664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.390697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.390821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.390856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.391109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.391143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.391429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.391462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.391594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.391627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 
00:27:21.901 [2024-11-20 16:20:22.391878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.391911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.392120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.392153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.392338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.392590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.392623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.392810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.392842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 
00:27:21.901 [2024-11-20 16:20:22.393046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.393080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.393288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.393320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.393534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.393566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.393765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.393796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 00:27:21.901 [2024-11-20 16:20:22.393991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.901 [2024-11-20 16:20:22.394026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.901 qpair failed and we were unable to recover it. 
00:27:21.901 [2024-11-20 16:20:22.394167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.394199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.394334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.394366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.394664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.394696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.394824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.394856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.395050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.395084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.395211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.395244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.395367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.395399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.395547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.395579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.395855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.395887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.396199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.396233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.396442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.396474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.396778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.396810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.397113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.397147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.397346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.397379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.397562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.397600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.397850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.397883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.398078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.398112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.398299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.398331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.398530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.398562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.398845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.398877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.399161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.399195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.399478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.399510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.399691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.399724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.399860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.399891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.400170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.400204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.400432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.400464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.400574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.400607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.400860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.400892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.401157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.401191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.401444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.401477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.401772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.401803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.402100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.402134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.402412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.402444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.402727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.402759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.403027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.403061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 
00:27:21.902 [2024-11-20 16:20:22.403356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.403389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.403542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.403574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.902 [2024-11-20 16:20:22.403824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.902 [2024-11-20 16:20:22.403856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.902 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.404130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.404164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.404443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.404476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 
00:27:21.903 [2024-11-20 16:20:22.404703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.404736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.404927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.404973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.405228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.405260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.405456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.405487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.405680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.405712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 
00:27:21.903 [2024-11-20 16:20:22.405921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.405965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.406275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.406317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.406578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.406609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.406860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.406893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.407211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.407245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 
00:27:21.903 [2024-11-20 16:20:22.407515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.407548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.407731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.407763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.407958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.407993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.408192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.408224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.408358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.408396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 
00:27:21.903 [2024-11-20 16:20:22.408622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.408654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.408857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.408888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.409098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.409132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.409408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.409440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 00:27:21.903 [2024-11-20 16:20:22.409655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.409687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 
00:27:21.903 [2024-11-20 16:20:22.409870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.903 [2024-11-20 16:20:22.409902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.903 qpair failed and we were unable to recover it. 
00:27:21.907 [... the same posix_sock_create connect() failed (errno = 111) and nvme_tcp_qpair_connect_sock errors for tqpair=0x7fd850000b90 (addr=10.0.0.2, port=4420) repeat continuously through 16:20:22.440; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:21.907 [2024-11-20 16:20:22.441017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.441051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.441278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.441309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.441503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.441536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.441818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.441850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.442032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.442065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 
00:27:21.907 [2024-11-20 16:20:22.442269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.442300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.442494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.442525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.442636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.442666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.442942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.442998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.443199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.443231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 
00:27:21.907 [2024-11-20 16:20:22.443452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.443484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.443628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.443659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.443942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.443985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.444282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.444315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.444450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.444481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 
00:27:21.907 [2024-11-20 16:20:22.444800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.444833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.444942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.444994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.445269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.445301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.445575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.445606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.445897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.445929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 
00:27:21.907 [2024-11-20 16:20:22.446100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.446133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.446414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.446446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.446732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.446764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.446893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.446925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.447074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.447108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 
00:27:21.907 [2024-11-20 16:20:22.447363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.447394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.447674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.447706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.447989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.448023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.448287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.448319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.448472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.448503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 
00:27:21.907 [2024-11-20 16:20:22.448804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.448836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.449113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.449146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.449347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.449379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.907 [2024-11-20 16:20:22.449578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.907 [2024-11-20 16:20:22.449610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.907 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.449789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.449821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.450070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.450103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.450359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.450391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.450698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.450730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.450977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.451010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.451300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.451339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.451604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.451637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.451904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.451935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.452235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.452268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.452492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.452523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.452704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.452737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.453003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.453037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.453241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.453285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.453514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.453549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.453772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.453802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.454084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.454121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.454402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.454435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.454621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.454653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.454923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.454971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.455177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.455210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.455486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.455518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.455734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.455766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.456065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.456099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.456366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.456399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.456580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.456611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.456888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.456921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.457151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.457184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.457464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.457497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.457778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.457809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.458114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.458147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.458367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.458400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.458665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.458696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.458910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.458942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.459206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.459240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.459514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.459546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 00:27:21.908 [2024-11-20 16:20:22.459824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.908 [2024-11-20 16:20:22.459857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.908 qpair failed and we were unable to recover it. 
00:27:21.908 [2024-11-20 16:20:22.460006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.460041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.460316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.460347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.460621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.460653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.460863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.460896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.461141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.461175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.461425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.461457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.461721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.461753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.462032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.462066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.462355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.462387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.462664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.462702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.462919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.462959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.463237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.463270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.463551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.463581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.463864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.463896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.464181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.464215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.464497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.464529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.464815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.464848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.465032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.465065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.465318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.465350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.465628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.465660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.465910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.465942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.466222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.466255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.466540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.466572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.466851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.466885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.467138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.467171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.467428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.467461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.467759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.467793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.468011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.468045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.468312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.468345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.468640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.468672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.468943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.468993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.469192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.469224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.469501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.469533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.469727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.469758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.470001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.470034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 
00:27:21.909 [2024-11-20 16:20:22.470330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.470362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.470648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.470680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.909 [2024-11-20 16:20:22.470836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.909 [2024-11-20 16:20:22.470867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.909 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.471146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.471180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.471482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.471513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 
00:27:21.910 [2024-11-20 16:20:22.471778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.471810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.472002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.472037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.472314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.472346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.472565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.472596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.472794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.472827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 
00:27:21.910 [2024-11-20 16:20:22.473068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.473101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.473377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.473409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.473591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.473622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.473899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.473931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.474258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.474297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 
00:27:21.910 [2024-11-20 16:20:22.474587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.474619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.474826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.474858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.475132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.475166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.475365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.475397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.475583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.475614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 
00:27:21.910 [2024-11-20 16:20:22.475865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.475898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.476210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.476244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.476492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.476524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.476835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.476867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.477154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.477189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 
00:27:21.910 [2024-11-20 16:20:22.477343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.477376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.477594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.477626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.477846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.477878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.910 [2024-11-20 16:20:22.478176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.910 [2024-11-20 16:20:22.478209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.910 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.478421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.478454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.478706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.478737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.479010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.479045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.479325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.479356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.479586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.479618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.479840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.479871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.480122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.480155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.480352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.480384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.480566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.480598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.480780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.480811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.481013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.481047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.481255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.481287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.481567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.481601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.481729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.481760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.481984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.482017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.482243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.482275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.482575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.482606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.482805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.482836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.483113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.483146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.483435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.483467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.483715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.483747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.484016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.484048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.484170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.484202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.484458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.484489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.484672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.484703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.484975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.485009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.485220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.485253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.485508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.485540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.485810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.485843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.486044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.486077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.486270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.486303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.486584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.486615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.486823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.486855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.487154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.487187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.487456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.487487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 00:27:21.911 [2024-11-20 16:20:22.487628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.911 [2024-11-20 16:20:22.487661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.911 qpair failed and we were unable to recover it. 
00:27:21.911 [2024-11-20 16:20:22.487848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.912 [2024-11-20 16:20:22.487879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.912 qpair failed and we were unable to recover it. 00:27:21.912 [2024-11-20 16:20:22.488171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.912 [2024-11-20 16:20:22.488204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.912 qpair failed and we were unable to recover it. 00:27:21.912 [2024-11-20 16:20:22.488419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.912 [2024-11-20 16:20:22.488451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.912 qpair failed and we were unable to recover it. 00:27:21.912 [2024-11-20 16:20:22.488643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.912 [2024-11-20 16:20:22.488675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.912 qpair failed and we were unable to recover it. 00:27:21.912 [2024-11-20 16:20:22.488963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.912 [2024-11-20 16:20:22.488996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.912 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.518696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.518728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.518999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.519032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.519312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.519345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.519653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.519685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.519966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.519999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.520133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.520166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.520311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.520343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.520619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.520650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.520844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.520875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.521153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.521186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.521466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.521499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.521781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.521812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.522035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.522069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.522277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.522309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.522508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.522540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.522756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.522788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.523059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.523093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.523388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.523420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.523712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.523744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.524022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.524055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.524345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.524378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.524661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.524693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.524894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.524925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.525116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.525150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.525350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.525381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.525603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.525636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.525914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.525962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.526236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.526269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.526405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.526436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.526717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.526749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 
00:27:21.915 [2024-11-20 16:20:22.527003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.527037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.527218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.527251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.915 [2024-11-20 16:20:22.527532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.915 [2024-11-20 16:20:22.527564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.915 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.527764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.527797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.528031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.528064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.528206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.528238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.528451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.528484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.528614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.528646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.528853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.528884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.529168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.529201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.529408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.529440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.529731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.529763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.530062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.530096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.530277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.530310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.530563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.530595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.530890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.530923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.531225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.531258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.531518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.531550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.531773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.531804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.532049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.532385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.532417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.532699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.532732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.533011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.533044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.533276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.533308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.533524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.533556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.533749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.533781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.534061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.534094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.534275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.534307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.534499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.534531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.534666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.534698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.534893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.534925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.535238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.535271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.535546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.535578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.535720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.535751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.536027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.536061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 
00:27:21.916 [2024-11-20 16:20:22.536342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.536375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.536658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.536697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.916 [2024-11-20 16:20:22.536977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.916 [2024-11-20 16:20:22.537010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.916 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.537325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.537357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.537619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.537652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 
00:27:21.917 [2024-11-20 16:20:22.537963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.537996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.538137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.538169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.538454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.538487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.538690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.538721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.538998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.539033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 
00:27:21.917 [2024-11-20 16:20:22.539318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.539348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.539625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.539658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.539942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.539988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.540242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.540274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 00:27:21.917 [2024-11-20 16:20:22.540574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.917 [2024-11-20 16:20:22.540606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.917 qpair failed and we were unable to recover it. 
00:27:21.917 [2024-11-20 16:20:22.540835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.540867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.541127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.541161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.541456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.541489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.541758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.541791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.542040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.542074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.542255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.542287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.542418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.542450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.542730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.542761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.542999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.543033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.543174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.543206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.543344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.543375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.543653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.543685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.543990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.544023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.544287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.544320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.544621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.544652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.544917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.544981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.545177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.545210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.545487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.545519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.545797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.545829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.546091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.546125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.546399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.546432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.546628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.546660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.546874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.546905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.547118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.917 [2024-11-20 16:20:22.547152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.917 qpair failed and we were unable to recover it.
00:27:21.917 [2024-11-20 16:20:22.547403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.547435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.547617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.547648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.547946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.547995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.548229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.548261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.548581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.548613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.548891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.548922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.549168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.549202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.549455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.549487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.549746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.549778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.549983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.550017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.550281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.550314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.550579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.550610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.550866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.550898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.551211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.551244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.551534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.551566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.551844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.551875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.552075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.552109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.552312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.552343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.552596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.552628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.552925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.552965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.553239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.553271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.553546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.553579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.553801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.553832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.554114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.554149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.554430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.554462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.554659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.554690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.554943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.554986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.555283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.555316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.555576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.555608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.555910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.555942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.556209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.556243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.556535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.556567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.556746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.556778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.557053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.557088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.557296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.557327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.918 qpair failed and we were unable to recover it.
00:27:21.918 [2024-11-20 16:20:22.557596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.918 [2024-11-20 16:20:22.557629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.557925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.557968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.558219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.558251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.558450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.558482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.558763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.558794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.559080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.559115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.559338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.559369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.559677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.559714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.560027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.560062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.560318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.560351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.560608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.560640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.560943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.560994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.561189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.561221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.561518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.561550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.561831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.561863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.562142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.562176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.562455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.562487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.562774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.562807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.563031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.563064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.563340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.563372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.563516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.563549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.563770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.563801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.563994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.564028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.564335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.564368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.564624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.564655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.564921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.564960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.565236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.565268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.565569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.565601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.565808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.565839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.566113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.566147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.566439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.566471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.566767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.566799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.567077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.567112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.567396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.567427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.567729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.567762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.568030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.568063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.919 [2024-11-20 16:20:22.568287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.919 [2024-11-20 16:20:22.568320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.919 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.568540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.568573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.568763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.568795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.569007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.569042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.569322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.569354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.569636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.569668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.569961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.569995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.570120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.570153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.570428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.570461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.570753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.570785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.571081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.571115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.571268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.571312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.571564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.571595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.571795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.571827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.572026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.572059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.572336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.572368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.572621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.572653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.572860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.573117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.573150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.573447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.920 [2024-11-20 16:20:22.573479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.920 qpair failed and we were unable to recover it.
00:27:21.920 [2024-11-20 16:20:22.573691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.573723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.574002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.574036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.574312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.574344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.574542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.574574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.574795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.574826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 
00:27:21.920 [2024-11-20 16:20:22.575088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.575122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.575429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.575461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.575586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.575618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.575866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.575898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 00:27:21.920 [2024-11-20 16:20:22.576122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.920 [2024-11-20 16:20:22.576156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.920 qpair failed and we were unable to recover it. 
00:27:21.920 [2024-11-20 16:20:22.576411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.576444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.576707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.576739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.577043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.577077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.577343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.577376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.577629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.577661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.577914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.577946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.578141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.578174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.578454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.578486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.578775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.578809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.579090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.579125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.579405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.579436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.579737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.579769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.579918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.579958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.580234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.580266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.580547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.580580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.580777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.580810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.581083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.581117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.581297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.581329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.581606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.581638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.581862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.581894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.582181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.582215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.582465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.582504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.582784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.582816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.583083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.583117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.583396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.583429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.583715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.583748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.583875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.583907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.584219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.584253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.584524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.584556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.584751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.584783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.584982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.585016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.585270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.585303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.585495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.585527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.585723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.585754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.586028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.586063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 
00:27:21.921 [2024-11-20 16:20:22.586206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.586239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.921 qpair failed and we were unable to recover it. 00:27:21.921 [2024-11-20 16:20:22.586488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.921 [2024-11-20 16:20:22.586520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.586713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.586745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.587026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.587060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.587200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.587232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 
00:27:21.922 [2024-11-20 16:20:22.587428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.587459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.587568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.587601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.587782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.587813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.588078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.588111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.588365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.588396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 
00:27:21.922 [2024-11-20 16:20:22.588608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.588640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.588906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.588938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.589241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.589274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.589535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.589568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.589763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.589795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 
00:27:21.922 [2024-11-20 16:20:22.590080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.590114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.590369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.590401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.590702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.590734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.591003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.591036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.591245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.591277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 
00:27:21.922 [2024-11-20 16:20:22.591480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.591512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.591769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.591801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.592058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.592092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.592238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.592271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.592550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.592582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 
00:27:21.922 [2024-11-20 16:20:22.592834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.592866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.593130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.593170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.593466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.593498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.593764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.593796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 00:27:21.922 [2024-11-20 16:20:22.593945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.922 [2024-11-20 16:20:22.593987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.922 qpair failed and we were unable to recover it. 
00:27:21.922 [2024-11-20 16:20:22.594182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.922 [2024-11-20 16:20:22.594215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.922 qpair failed and we were unable to recover it.
00:27:21.922 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 16:20:22.594363 through 16:20:22.626317 ...]
00:27:21.925 [2024-11-20 16:20:22.626587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.925 [2024-11-20 16:20:22.626620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.925 qpair failed and we were unable to recover it. 00:27:21.925 [2024-11-20 16:20:22.626898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.925 [2024-11-20 16:20:22.626930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.627223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.627256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.627447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.627479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.627756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.627787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.628039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.628073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.628338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.628370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.628577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.628608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.628868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.628900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.629197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.629231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.629499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.629530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.629824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.629856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.630049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.630083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.630397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.630429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.630701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.630732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.631018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.631051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.631253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.631285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.631496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.631527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.631730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.631762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.631960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.631994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.632245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.632277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.632415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.632447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.632695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.632727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.633030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.633063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.633178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.633210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.633464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.633496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.633694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.633725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.634002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.634036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.634318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.634356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.634635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.634667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.634794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.634825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.635029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.635250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.635282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.635504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.635535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.635786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.635817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 
00:27:21.926 [2024-11-20 16:20:22.636073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.636106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.636404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.636437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.636712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.926 [2024-11-20 16:20:22.636743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.926 qpair failed and we were unable to recover it. 00:27:21.926 [2024-11-20 16:20:22.637023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.637056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.637260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.637292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.637543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.637574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.637840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.637872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.638135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.638168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.638442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.638473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.638744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.638775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.638994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.639026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.639331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.639364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.639624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.639655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.639883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.639915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.640175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.640208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.640507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.640538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.640686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.640717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.640993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.641026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.641206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.641238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.641556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.641587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.641873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.641905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.642128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.642163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.642345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.642376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.642648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.642680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.642969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.643003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.643283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.643315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.643596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.643627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.643936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.643981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.644277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.644310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.644568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.644599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.644854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.644886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.645119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.645154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.645426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.645458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.645588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.645625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.645820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.645852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.927 [2024-11-20 16:20:22.646107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.646140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.646328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.646359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.646538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.646570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.646848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.646879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 00:27:21.927 [2024-11-20 16:20:22.647154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.927 [2024-11-20 16:20:22.647187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.927 qpair failed and we were unable to recover it. 
00:27:21.928 [2024-11-20 16:20:22.647402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.928 [2024-11-20 16:20:22.647434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.928 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure message group repeated for retry timestamps 2024-11-20 16:20:22.647658 through 16:20:22.679149; log prefixes 00:27:21.928–00:27:21.930]
00:27:21.930 [2024-11-20 16:20:22.679274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.679305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.679527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.679560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.679811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.679842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.680120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.680154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.680433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.680465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 
00:27:21.930 [2024-11-20 16:20:22.680758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.680789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.680939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.680988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.681269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.681301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.681548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.681580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.681879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.681909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 
00:27:21.930 [2024-11-20 16:20:22.682216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.682250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.682449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.682480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.682705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.930 [2024-11-20 16:20:22.682737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.930 qpair failed and we were unable to recover it. 00:27:21.930 [2024-11-20 16:20:22.682987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.683020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.683225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.683257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.683557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.683588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.683877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.683908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.684124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.684157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.684340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.684371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.684619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.684651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.684844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.684875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.685105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.685139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.685252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.685284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.685480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.685512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.685793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.685824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.686132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.686166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.686428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.686459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.686711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.686743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.686884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.686916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.687069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.687101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.687302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.687333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.687465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.687496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.687691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.687723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.687929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.687969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.688151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.688183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.688470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.688502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.688772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.688803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.689018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.689052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.689330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.689367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.689586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.689618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.689895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.689927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.690079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.690111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.690364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.690395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.690617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.690648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.690866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.690897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.691183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.691216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.691471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.691502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.691754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.691786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.692052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.692086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.692287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.692319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.692443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.692474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.692666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.692698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.692904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.692935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.693224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.693257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.693525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.693557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.693855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.693888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.694159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.694192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.694481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.694513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.694791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.694822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.694973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.695007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.695289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.695320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.695520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.695551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.695803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.695835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.696054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.696087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.696267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.696299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 
00:27:21.931 [2024-11-20 16:20:22.696508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.696540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.931 [2024-11-20 16:20:22.696811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.931 [2024-11-20 16:20:22.696842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.931 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.697103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.697137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.697390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.697421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.697720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.697752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 
00:27:21.932 [2024-11-20 16:20:22.697963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.697997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.698273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.698306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.698562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.698593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.698844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.698876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 00:27:21.932 [2024-11-20 16:20:22.699072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.932 [2024-11-20 16:20:22.699105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:21.932 qpair failed and we were unable to recover it. 
00:27:21.932 [2024-11-20 16:20:22.699388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.932 [2024-11-20 16:20:22.699419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:21.932 qpair failed and we were unable to recover it.
00:27:21.932 [… the same three-line error (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 16:20:22.699698 through 16:20:22.731761, build time 00:27:21.932–00:27:22.213 …]
00:27:22.213 [2024-11-20 16:20:22.732041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.732075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.732358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.732391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.732608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.732640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.732918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.732958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.733218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.733250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 
00:27:22.213 [2024-11-20 16:20:22.733541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.733573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.733870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.733902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.734219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.734253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.213 [2024-11-20 16:20:22.734504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.213 [2024-11-20 16:20:22.734542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.213 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.734855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.734887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.735185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.735219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.735435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.735469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.735735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.735767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.735971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.736005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.736215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.736248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.736501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.736533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.736786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.736818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.737121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.737155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.737439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.737471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.737755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.737787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.738096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.738130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.738389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.738422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.738711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.738744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.739021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.739056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.739246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.739278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.739546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.739578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.739860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.739892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.740173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.740207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.740462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.740494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.740701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.740734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.740925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.740968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.741176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.741209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.741410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.741443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.741693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.741725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.741920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.741962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.742190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.742223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.742445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.742477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.742606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.742638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.742852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.742883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.742994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.743028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.743284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.743315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.743569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.743600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.743822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.743854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.744050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.744085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.214 [2024-11-20 16:20:22.744313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.744345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 
00:27:22.214 [2024-11-20 16:20:22.744646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.214 [2024-11-20 16:20:22.744678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.214 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.744892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.744924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.745225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.745258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.745524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.745562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.745797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.745830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
00:27:22.215 [2024-11-20 16:20:22.746115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.746148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.746289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.746322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.746599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.746631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.746890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.746922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.747152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.747185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
00:27:22.215 [2024-11-20 16:20:22.747465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.747498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.747773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.747806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.748098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.748131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.748405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.748437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.748725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.748758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
00:27:22.215 [2024-11-20 16:20:22.749019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.749052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.749280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.749312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.749573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.749605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.749887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.749920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.750208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.750240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
00:27:22.215 [2024-11-20 16:20:22.750519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.750551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.750836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.750868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.751069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.751103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.751363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.751396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.751715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.751747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
00:27:22.215 [2024-11-20 16:20:22.751939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.751985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.752250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.752282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.752563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.752595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.752802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.752834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 00:27:22.215 [2024-11-20 16:20:22.753087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.753121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
00:27:22.215 [2024-11-20 16:20:22.753432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.215 [2024-11-20 16:20:22.753465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.215 qpair failed and we were unable to recover it. 
[... identical error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated for every retry attempt from 16:20:22.753 through 16:20:22.785; repeats elided ...]
00:27:22.218 [2024-11-20 16:20:22.786108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.786141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.786343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.786376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.786652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.786684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.786937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.786995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.787282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.787314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.787584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.787622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.787917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.787958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.788221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.788254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.788546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.788577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.788773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.788805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.789062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.789095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.789399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.789431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.789696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.789728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.789959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.789993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.790242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.790274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.790526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.790559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.790817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.790849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.791149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.791183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.791449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.791481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.791679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.791713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.791916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.791959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.792245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.792277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.792402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.792435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.792691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.792723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.793015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.793049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.793338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.793370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.793626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.793659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.793851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.793883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.794212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.794246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.794534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.794566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.794769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.794801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.794987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.795022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.795312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.795345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.795648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.795681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.795956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.795989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 
00:27:22.219 [2024-11-20 16:20:22.796267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.796299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.796578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.219 [2024-11-20 16:20:22.796611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.219 qpair failed and we were unable to recover it. 00:27:22.219 [2024-11-20 16:20:22.796896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.796928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.797209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.797242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.797440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.797744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.797777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.797971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.798004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.798266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.798297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.798524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.798557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.798854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.798885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.799196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.799237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.799423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.799455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.799735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.799767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.800038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.800071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.800360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.800393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.800669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.800702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.800991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.801025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.801304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.801337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.801615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.801647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.801934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.801974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.802247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.802279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.802504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.802537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.802832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.802864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.803064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.803098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.803323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.803355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.803550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.803582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.803723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.803755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.804029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.804063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.804367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.804400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.804660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.804692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.804870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.804902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.805216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.805249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.805532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.805564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.805846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.805878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.806160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.806194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.806474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.806506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.806789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.806820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.807092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.807126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.807392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.807423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 00:27:22.220 [2024-11-20 16:20:22.807601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.220 [2024-11-20 16:20:22.807633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.220 qpair failed and we were unable to recover it. 
00:27:22.220 [2024-11-20 16:20:22.807836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.221 [2024-11-20 16:20:22.807867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.221 qpair failed and we were unable to recover it.
[log condensed: the three-line sequence above (connect() failed, errno = 111; sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats 114 more times with only the timestamps changing, spanning 16:20:22.808144 through 16:20:22.840052]
00:27:22.224 [2024-11-20 16:20:22.840245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.840276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.840506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.840539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.840863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.840896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.841120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.841155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.841362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.841394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 
00:27:22.224 [2024-11-20 16:20:22.841616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.841647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.841849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.841881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.842138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.842170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.842446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.842479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.842764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.842798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 
00:27:22.224 [2024-11-20 16:20:22.843076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.843110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.843411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.843443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.843709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.843740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.843965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.844000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.844284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.844316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 
00:27:22.224 [2024-11-20 16:20:22.844588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.844626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.844908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.844941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.845172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.845204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.845406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.845437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.845641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.845673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 
00:27:22.224 [2024-11-20 16:20:22.845871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.845903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.846170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.846204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.846423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.846455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.846663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.846694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.846917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.846960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 
00:27:22.224 [2024-11-20 16:20:22.847089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.847120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.847332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.847364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.847556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.847587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.847868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.847899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 00:27:22.224 [2024-11-20 16:20:22.848091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.224 [2024-11-20 16:20:22.848125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.224 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.848274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.848305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.848488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.848519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.848702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.848734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.849011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.849044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.849188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.849221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.849452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.849484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.849694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.849726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.849854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.849885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.850160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.850195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.850405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.850438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.850644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.850676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.850970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.851005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.851271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.851304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.851638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.851670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.851923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.851972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.852268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.852301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.852647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.852680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.852968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.853002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.853281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.853313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.853592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.853625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.853839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.853871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.854123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.854158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.854292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.854325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.854553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.854586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.854790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.854824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.855008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.855049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.855279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.855312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.855509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.855542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.855720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.855752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.856056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.856089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 
00:27:22.225 [2024-11-20 16:20:22.856234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.856266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.856545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.856577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.225 [2024-11-20 16:20:22.856786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.225 [2024-11-20 16:20:22.856817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.225 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.857016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.857049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.857324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.857355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 
00:27:22.226 [2024-11-20 16:20:22.857634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.857666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.857967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.858001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.858252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.858284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.858477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.858509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.858778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.858810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 
00:27:22.226 [2024-11-20 16:20:22.859116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.859150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.859434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.859466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.859690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.859723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.859921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.859963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.860113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.860145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 
00:27:22.226 [2024-11-20 16:20:22.860367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.860400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.860698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.860730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.860929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.860973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.861237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.861271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.861529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.861561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 
00:27:22.226 [2024-11-20 16:20:22.861789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.861821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.862073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.862108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.862257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.862290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.862497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.862528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 00:27:22.226 [2024-11-20 16:20:22.862791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.226 [2024-11-20 16:20:22.862824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.226 qpair failed and we were unable to recover it. 
00:27:22.226 [2024-11-20 16:20:22.863029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.863062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.863195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.863228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.863451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.863483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.863775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.863807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.863998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.864031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.864179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.864211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.864411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.864444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.864675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.864706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.864972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.865006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.865207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.865239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.865436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.865474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.865679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.865711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.865890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.865923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.866072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.866105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.866333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.866365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.226 qpair failed and we were unable to recover it.
00:27:22.226 [2024-11-20 16:20:22.866547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.226 [2024-11-20 16:20:22.866579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.866770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.866802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.867081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.867115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.867260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.867292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.867520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.867553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.867761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.867793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.868048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.868081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.868329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.868363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.868665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.868698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.868921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.868962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.869223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.869255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.869527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.869558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.869741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.869772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.870003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.870036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.870240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.870272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.870492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.870523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.870661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.870695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.870959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.870992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.871269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.871302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.871507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.871539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.871815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.871846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.872059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.872095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.872329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.872361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.872511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.872547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.872735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.872768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.872969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.873002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.873227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.873258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.873467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.873499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.873631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.873663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.873916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.873958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.874239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.874272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.874495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.874526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.874709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.874741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.874995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.875028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.875302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.875333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.875621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.875658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.875964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.875998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.227 qpair failed and we were unable to recover it.
00:27:22.227 [2024-11-20 16:20:22.876203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.227 [2024-11-20 16:20:22.876235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.876538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.876570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.876855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.876887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.877115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.877148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.877454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.877485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.877806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.877837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.878100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.878134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.878341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.878372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.878642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.878673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.878974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.879007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.879279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.879312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.879498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.879529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.879787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.879819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.880099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.880134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.880389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.880419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.880732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.880810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.881143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.881186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.881445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.881480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.881683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.881717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.881994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.882029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.882220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.882252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.882512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.882544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.882739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.882771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.883052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.883086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.883238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.883270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.883428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.883470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.883663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.883694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.883980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.884013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.884161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.884193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.884459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.884491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.884765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.884796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.885025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.885059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.885335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.885367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.885672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.885703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.885969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.886003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.886302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.886334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.886625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.886656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.886852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.886883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.228 qpair failed and we were unable to recover it.
00:27:22.228 [2024-11-20 16:20:22.887172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.228 [2024-11-20 16:20:22.887206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.229 qpair failed and we were unable to recover it.
00:27:22.229 [2024-11-20 16:20:22.887505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.229 [2024-11-20 16:20:22.887537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.229 qpair failed and we were unable to recover it.
00:27:22.229 [2024-11-20 16:20:22.887680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.229 [2024-11-20 16:20:22.887711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.229 qpair failed and we were unable to recover it.
00:27:22.229 [2024-11-20 16:20:22.887938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.229 [2024-11-20 16:20:22.887978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.229 qpair failed and we were unable to recover it.
00:27:22.229 [2024-11-20 16:20:22.888114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.229 [2024-11-20 16:20:22.888145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.229 qpair failed and we were unable to recover it.
00:27:22.229 [2024-11-20 16:20:22.888350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.888382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.888701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.888734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.888973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.889007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.889219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.889251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.889546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.889578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 
00:27:22.229 [2024-11-20 16:20:22.889854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.889885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.890176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.890209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.890488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.890521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.890721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.890752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.890970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.891017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 
00:27:22.229 [2024-11-20 16:20:22.891265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.891297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.891645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.891676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.891854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.891885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.892137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.892171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.892469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.892501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 
00:27:22.229 [2024-11-20 16:20:22.892794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.892826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.893034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.893068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.893323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.893355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.893656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.893687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.893900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.893932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 
00:27:22.229 [2024-11-20 16:20:22.894076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.894108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.894313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.894345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.894586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.894619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.894853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.894885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.895115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.895149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 
00:27:22.229 [2024-11-20 16:20:22.895313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.895346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.895586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.895617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.895895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.895927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.896149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.896181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.896371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.896403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 
00:27:22.229 [2024-11-20 16:20:22.896599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.896630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.896816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.896848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.897029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.229 [2024-11-20 16:20:22.897063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.229 qpair failed and we were unable to recover it. 00:27:22.229 [2024-11-20 16:20:22.897317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.897348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.897631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.897663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.897862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.897894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.898142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.898175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.898392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.898424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.898684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.898717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.899026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.899060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.899216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.899247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.899516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.899547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.899800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.899831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.900041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.900074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.900271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.900303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.900517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.900550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.900756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.900787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.900986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.901020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.901270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.901301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.901493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.901525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.901731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.901763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.902019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.902052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.902249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.902281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.902560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.902591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.902713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.902745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.902963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.902996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.903202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.903235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.903375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.903406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.903703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.903735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.904018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.904052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.904262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.904293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.904496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.904527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.904769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.904803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.905002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.905035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.905189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.905221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 
00:27:22.230 [2024-11-20 16:20:22.905350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.905382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.230 [2024-11-20 16:20:22.905583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.230 [2024-11-20 16:20:22.905614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.230 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.905752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.905783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.905978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.906011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.906215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.906247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 
00:27:22.231 [2024-11-20 16:20:22.906584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.906616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.906880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.906912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.907143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.907176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.907380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.907412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.907704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.907735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 
00:27:22.231 [2024-11-20 16:20:22.907934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.907976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.908180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.908211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.908471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.908508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.908812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.908844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.909069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.909103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 
00:27:22.231 [2024-11-20 16:20:22.909252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.909285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.909483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.909514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.909712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.909744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.909997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.910031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 00:27:22.231 [2024-11-20 16:20:22.910222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.231 [2024-11-20 16:20:22.910254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.231 qpair failed and we were unable to recover it. 
00:27:22.231 [2024-11-20 16:20:22.910507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.231 [2024-11-20 16:20:22.910540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.231 qpair failed and we were unable to recover it.
[... the same pair of errors — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED) followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 — repeats for every reconnect attempt from 16:20:22.910 through 16:20:22.938 (log timestamps 00:27:22.231-00:27:22.234); each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:22.234 [2024-11-20 16:20:22.939218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.939250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.939391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.939422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.939698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.939730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.939908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.939939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.940211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.940243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 
00:27:22.234 [2024-11-20 16:20:22.940495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.940526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.940827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.940857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.941006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.941039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.941313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.941344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.941546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.941577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 
00:27:22.234 [2024-11-20 16:20:22.941767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.941798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.941988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.942020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.942225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.942256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.942520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.942553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.234 [2024-11-20 16:20:22.942746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.942777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 
00:27:22.234 [2024-11-20 16:20:22.943027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.234 [2024-11-20 16:20:22.943059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.234 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.943203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.943234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.943484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.943516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.943714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.943744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.943998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.944030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.944235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.944265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.944450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.944481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.944625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.944657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.944930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.944994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.945197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.945228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.945362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.945393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.945651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.945687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.945881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.945912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.946107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.946140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.946391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.946423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.946700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.946732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.947016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.947050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.947319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.947350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.947612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.947643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.947844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.947875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.948064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.948097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.948303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.948336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.948586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.948617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.948818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.948849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.949133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.949166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.949395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.949690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.949722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.949996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.950030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.950227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.950258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.950480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.950512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.950768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.950800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.950995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.951029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.951303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.951334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.951477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.951508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.951720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.951752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 
00:27:22.235 [2024-11-20 16:20:22.952026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.952060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.235 [2024-11-20 16:20:22.952274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.235 [2024-11-20 16:20:22.952305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.235 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.952454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.952486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.952694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.952725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.952999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.953032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 
00:27:22.236 [2024-11-20 16:20:22.953270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.953302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.953496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.953528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.953727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.953758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.953943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.953985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.954119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.954151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 
00:27:22.236 [2024-11-20 16:20:22.954405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.954436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.954716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.954746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.954990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.955023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.955214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.955245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.955428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.955459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 
00:27:22.236 [2024-11-20 16:20:22.955761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.955792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.956050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.956084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.956365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.956397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.956641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.956672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.956793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.956825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 
00:27:22.236 [2024-11-20 16:20:22.957029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.957062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.957331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.957362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.957633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.957665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.957857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.957888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.958108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.958141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 
00:27:22.236 [2024-11-20 16:20:22.958394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.958425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.958738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.958770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.958998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.959031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.959227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.959258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 00:27:22.236 [2024-11-20 16:20:22.959458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.236 [2024-11-20 16:20:22.959490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.236 qpair failed and we were unable to recover it. 
00:27:22.236-00:27:22.239 [... ~110 further repetitions of the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / qpair failed record for tqpair=0x1b7cba0 (addr=10.0.0.2, port=4420), timestamps 16:20:22.959698 through 16:20:22.988257, elided ...]
00:27:22.239 [2024-11-20 16:20:22.988420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.988452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.988745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.988777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.989082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.989116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.989314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.989345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.989648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.989680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 
00:27:22.239 [2024-11-20 16:20:22.989971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.990005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.990302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.990334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.990561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.990593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.990856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.239 [2024-11-20 16:20:22.990888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.239 qpair failed and we were unable to recover it. 00:27:22.239 [2024-11-20 16:20:22.991136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.991168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.991370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.991401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.991602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.991634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.991911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.991943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.992162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.992194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.992400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.992433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.992681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.992712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.992902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.992934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.993141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.993174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.993393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.993424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.993687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.993719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.993921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.993965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.994234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.994266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.994493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.994524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.994717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.994748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.994871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.994902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.995109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.995142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.995279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.995321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.995617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.995649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.995846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.995877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.996024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.996056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.996309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.996341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.996533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.996564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.996813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.996844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.997049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.997082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.997283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.997315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.997541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.997572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.997845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.997877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.998130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.998164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.998362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.998392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.998581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.998612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:22.998893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.998925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.999190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.999222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.999422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.999453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:22.999744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:22.999777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:23.000074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:23.000108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 
00:27:22.240 [2024-11-20 16:20:23.000378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:23.000410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:23.000620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:23.000652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.240 qpair failed and we were unable to recover it. 00:27:22.240 [2024-11-20 16:20:23.000841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.240 [2024-11-20 16:20:23.000872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.001126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.001160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.001297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.001328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 
00:27:22.241 [2024-11-20 16:20:23.001461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.001492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.001677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.001708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.001928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.001972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.002185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.002223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.002342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.002373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 
00:27:22.241 [2024-11-20 16:20:23.002680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.002712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.002972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.003005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.003225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.003256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.003460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.003492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.003642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.003674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 
00:27:22.241 [2024-11-20 16:20:23.003987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.004021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.004242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.004275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.004547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.004578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.004896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.004928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.005141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.005174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 
00:27:22.241 [2024-11-20 16:20:23.005317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.005347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.005613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.005645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.005864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.005896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.006160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.006192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.006460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.006491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 
00:27:22.241 [2024-11-20 16:20:23.006680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.006711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.006974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.007007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.007154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.007185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.007381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.007412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 00:27:22.241 [2024-11-20 16:20:23.007769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.241 [2024-11-20 16:20:23.007801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.241 qpair failed and we were unable to recover it. 
00:27:22.241 [2024-11-20 16:20:23.008023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.241 [2024-11-20 16:20:23.008057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.241 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim for every retry from 16:20:23.008 through 16:20:23.037 ...]
00:27:22.528 [2024-11-20 16:20:23.037496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.528 [2024-11-20 16:20:23.037530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.528 qpair failed and we were unable to recover it.
00:27:22.528 [2024-11-20 16:20:23.037805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.037838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.038056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.038113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.038389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.038421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.038711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.038743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.038942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.039001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 
00:27:22.528 [2024-11-20 16:20:23.039207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.039240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.039491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.039525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.039654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.039687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.039973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.040008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.040203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.040236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 
00:27:22.528 [2024-11-20 16:20:23.040486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.040518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.040724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.040757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.041030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.041064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.041252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.041286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.041554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.041587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 
00:27:22.528 [2024-11-20 16:20:23.041804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.041836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.042054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.042088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.042274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.042306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.042506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.042539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.042758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.042791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 
00:27:22.528 [2024-11-20 16:20:23.043054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.043089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.043318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.043351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.043696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.043728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.043923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.043966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.044228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.044260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 
00:27:22.528 [2024-11-20 16:20:23.044463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.044495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.044708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.044740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.044979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.528 [2024-11-20 16:20:23.045014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.528 qpair failed and we were unable to recover it. 00:27:22.528 [2024-11-20 16:20:23.045220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.045252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.045383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.045416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.045640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.045673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.045820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.045851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.046073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.046106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.046326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.046358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.046611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.046642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.046894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.046927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.047191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.047224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.047427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.047458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.047682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.047715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.047928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.047974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.048137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.048169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.048316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.048348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.048580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.048612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.048807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.048840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.049102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.049137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.049391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.049423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.049563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.049595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.049849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.049881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.050071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.050104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.050307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.050340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.050484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.050515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.050801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.050834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.051138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.051172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.051307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.051346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.051489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.051520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.051824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.051857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.052005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.052038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.052204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.052234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.052447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.052480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.052745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.052776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.052996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.053030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.053249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.053282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.053467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.053499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.053692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.053724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.053920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.053962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 
00:27:22.529 [2024-11-20 16:20:23.054103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.529 [2024-11-20 16:20:23.054135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.529 qpair failed and we were unable to recover it. 00:27:22.529 [2024-11-20 16:20:23.054335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.054366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.054602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.054633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.054844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.054876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.055107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.055141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 
00:27:22.530 [2024-11-20 16:20:23.055265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.055298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.055442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.055475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.055753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.055786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.055994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.056028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.056223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.056255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 
00:27:22.530 [2024-11-20 16:20:23.056451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.056484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.056757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.056788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.057004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.057038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.057171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.057203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 00:27:22.530 [2024-11-20 16:20:23.057404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.530 [2024-11-20 16:20:23.057436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.530 qpair failed and we were unable to recover it. 
00:27:22.533 [2024-11-20 16:20:23.085224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.085257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.085436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.085469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.085755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.085788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.086011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.086045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.086243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.086274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 
00:27:22.533 [2024-11-20 16:20:23.086533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.086566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.086842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.087134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.087168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.087390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.087424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.087571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.087603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 
00:27:22.533 [2024-11-20 16:20:23.087836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.087868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.088124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.088158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.088439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.088478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.088625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.088657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.088944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.088989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 
00:27:22.533 [2024-11-20 16:20:23.089299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.089333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.089602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.089634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.089913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.089946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.090166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.090199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.090464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.090496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 
00:27:22.533 [2024-11-20 16:20:23.090700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.090733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.091001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.091035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.533 [2024-11-20 16:20:23.091241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.533 [2024-11-20 16:20:23.091274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.533 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.091468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.091500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.091783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.091816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.092040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.092074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.092357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.092390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.092701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.092735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.092870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.092902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.093036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.093070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.093290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.093322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.093658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.093690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.093881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.093914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.094135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.094169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.094426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.094458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.094667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.094701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.094993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.095028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.095304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.095337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.095649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.095680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.095891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.095924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.096241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.096275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.096472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.096505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.096710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.096742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.096942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.096991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.097194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.097227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.097376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.097408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.097592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.097624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.097866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.097898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.098111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.098144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.098350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.098382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.098695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.098727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.098972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.099007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.099132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.099163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.099400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.099433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.099617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.099648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.099792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.099823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.100083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.100119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.100325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.100357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.100509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.100543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.100799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.100832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 
00:27:22.534 [2024-11-20 16:20:23.100968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.534 [2024-11-20 16:20:23.101002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.534 qpair failed and we were unable to recover it. 00:27:22.534 [2024-11-20 16:20:23.101210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.101241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.101460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.101490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.101708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.101739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.101855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.101886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 
00:27:22.535 [2024-11-20 16:20:23.102031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.102065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.102251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.102284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.102495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.102528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.102723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.102756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.102989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.103023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 
00:27:22.535 [2024-11-20 16:20:23.103179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.103211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.103353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.103384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.103651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.103684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.103973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.104007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.104282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.104314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 
00:27:22.535 [2024-11-20 16:20:23.104649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.104682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.104932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.104977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.105184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.105216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.105354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.105386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 00:27:22.535 [2024-11-20 16:20:23.105604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.535 [2024-11-20 16:20:23.105637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.535 qpair failed and we were unable to recover it. 
00:27:22.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2895082 Killed "${NVMF_APP[@]}" "$@"
00:27:22.536 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:22.536 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:22.536 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:22.536 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:22.536 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:22.536 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2895819
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2895819
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2895819 ']'
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:22.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:22.537 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:22.538 [2024-11-20 16:20:23.136898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.136922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.137052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.137076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.137244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.137268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.137454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.137477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.137712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.137735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 
00:27:22.538 [2024-11-20 16:20:23.137849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.137873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.138046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.138073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.138257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.138281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.138424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.138447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.138688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.138712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 
00:27:22.538 [2024-11-20 16:20:23.138965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.138991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.139172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.139195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.139450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.139477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.139766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.139790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.139973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.139997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 
00:27:22.538 [2024-11-20 16:20:23.140125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.140147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.140353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.140377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.140650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.140674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.140853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.140876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.144974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.145031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 
00:27:22.538 [2024-11-20 16:20:23.145255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.538 [2024-11-20 16:20:23.145278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.538 qpair failed and we were unable to recover it. 00:27:22.538 [2024-11-20 16:20:23.145537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.145561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.145692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.145854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.145878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.146064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.146087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.146210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.146233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.146400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.146436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.146604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.146625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.146744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.146765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.146889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.146911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.147088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.147112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.147280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.147304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.147492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.147514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.147621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.147642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.147774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.147797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.148023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.148047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.148208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.148230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.148355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.148376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.148553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.148576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.148705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.148728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.148906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.148928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.149112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.149134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.149383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.149407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.149604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.149626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.149733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.149756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.149958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.149984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.150090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.150113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.150235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.150259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.150451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.150478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.150599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.150622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.150881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.150905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.151038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.151062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.151169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.151191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.151446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.151470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.151578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.151601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 
00:27:22.539 [2024-11-20 16:20:23.151714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.151736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.151907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.151929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.152063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.152087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.152340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.539 [2024-11-20 16:20:23.152363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.539 qpair failed and we were unable to recover it. 00:27:22.539 [2024-11-20 16:20:23.152525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.152547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 
00:27:22.540 [2024-11-20 16:20:23.152661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.152686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.152794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.152816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.152923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.152946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.153158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.153181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.153283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.153308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 
00:27:22.540 [2024-11-20 16:20:23.153432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.153456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.153628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.153650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.153767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.153789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.155984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.156021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.156280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.156296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 
00:27:22.540 [2024-11-20 16:20:23.156456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.156472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.156622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.156637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.156738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.156753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.156914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.156930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 00:27:22.540 [2024-11-20 16:20:23.157175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.540 [2024-11-20 16:20:23.157197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.540 qpair failed and we were unable to recover it. 
00:27:22.540 [2024-11-20 16:20:23.157314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:22.540 [2024-11-20 16:20:23.157331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 
00:27:22.540 qpair failed and we were unable to recover it. 
00:27:22.543 [2024-11-20 16:20:23.172005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 
00:27:22.543 [2024-11-20 16:20:23.172547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.172887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 
00:27:22.543 [2024-11-20 16:20:23.172978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.172994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 
00:27:22.543 [2024-11-20 16:20:23.173478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.543 qpair failed and we were unable to recover it. 00:27:22.543 [2024-11-20 16:20:23.173822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.543 [2024-11-20 16:20:23.173835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.173988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 
00:27:22.544 [2024-11-20 16:20:23.174092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 
00:27:22.544 [2024-11-20 16:20:23.174508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 00:27:22.544 [2024-11-20 16:20:23.174945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.544 [2024-11-20 16:20:23.174987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.544 qpair failed and we were unable to recover it. 
00:27:22.544 [2024-11-20 16:20:23.175459] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:27:22.544 [2024-11-20 16:20:23.175521] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:22.546 [2024-11-20 16:20:23.182216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.182374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.182466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.182560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.182656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 
00:27:22.546 [2024-11-20 16:20:23.182751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.182915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.182931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 
00:27:22.546 [2024-11-20 16:20:23.183295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 
00:27:22.546 [2024-11-20 16:20:23.183761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.183919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.183934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 
00:27:22.546 [2024-11-20 16:20:23.184299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 
00:27:22.546 [2024-11-20 16:20:23.184773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.184882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.184900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.185015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.185182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.185294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 
00:27:22.546 [2024-11-20 16:20:23.185406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.185524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.185686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.546 [2024-11-20 16:20:23.185845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.546 [2024-11-20 16:20:23.185868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.546 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.186035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.186140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.186254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.186350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.186537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.186770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.186882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.186900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.186985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.187085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.187251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.187416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.187511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.187705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.187805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.187900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.187922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.188079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.188097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.188331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.188348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.188454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.188471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.188617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.188634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.188749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.188767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.188854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.188871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.188989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.189558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.189971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.189990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.190136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.190153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.190245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.190262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.190339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.190356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.190521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.190538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.190615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.190632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 
00:27:22.547 [2024-11-20 16:20:23.190779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.547 [2024-11-20 16:20:23.190856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.547 qpair failed and we were unable to recover it. 00:27:22.547 [2024-11-20 16:20:23.191021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 
00:27:22.548 [2024-11-20 16:20:23.191540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.191878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 00:27:22.548 [2024-11-20 16:20:23.191977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.548 [2024-11-20 16:20:23.192002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.548 qpair failed and we were unable to recover it. 
00:27:22.550 [2024-11-20 16:20:23.203784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.203804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.203906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.203926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.204128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8aaf0 is same with the state(6) to be set
00:27:22.550 [2024-11-20 16:20:23.204354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.204428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.204576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.204614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.204732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.204765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.204914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.204966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.205190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.205223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.205329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.205361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.205484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.205516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.205699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.550 [2024-11-20 16:20:23.205732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.550 qpair failed and we were unable to recover it.
00:27:22.550 [2024-11-20 16:20:23.205858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.205890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.206905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.206928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.207104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.207181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.207327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.207363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.207490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.207522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.207638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.207670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.207781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.207814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.207935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.207986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.208090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.208117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.208231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.208254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.208414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.208438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.208614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.208637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.208751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.208774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.208932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.208981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.209161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.209185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.209276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.209305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.209414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.209437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.209536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.209560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.209718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.209742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.209900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.209923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.210098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.210123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.210226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.210250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.210406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.210429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.210597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.210620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.210787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.210810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.210921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.210944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.211069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.211192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.211322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.211533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.211705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.551 [2024-11-20 16:20:23.211855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.551 qpair failed and we were unable to recover it.
00:27:22.551 [2024-11-20 16:20:23.211972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.212006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.212119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.212152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.212260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.212292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.212435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.212467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.212593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.212626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.212810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.212841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.213061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.213096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.213285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.213317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.213433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.213465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.213664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.213699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.213890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.213932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.214929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.214993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.215118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.215141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.215308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.215339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.215464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.215496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.215689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.215721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.215921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.215968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.216082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.216114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.216236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.216269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.216403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.216435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.216619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.216652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.216767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.216799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.216927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.216971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.217093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.217124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.217233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.217266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.217387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.217419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.217535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.217566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.217675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.217706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.217826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.217858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.218098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.218131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.218307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.218339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.218463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.552 [2024-11-20 16:20:23.218500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.552 qpair failed and we were unable to recover it.
00:27:22.552 [2024-11-20 16:20:23.218684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.218716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.218924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.218966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.219165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.219196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.219376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.219408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.219534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.219566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.219696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.219727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.219843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.219873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.219981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.220013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.220186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.220218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.220321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.220353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.220538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.220570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.220691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.220722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.220848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.220879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.221100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.221136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.221244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.221275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.221388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.221419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.221552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.221585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.221797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.221829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.221971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.222106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.222246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.222388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.222537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.222702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.222857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.222890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.223026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.223059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.223169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.223210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.223402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.223434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.223545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.553 [2024-11-20 16:20:23.223577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.553 qpair failed and we were unable to recover it.
00:27:22.553 [2024-11-20 16:20:23.223708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.223741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.223855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.223887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.224010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.224043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.224156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.224188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.224312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.224343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 
00:27:22.553 [2024-11-20 16:20:23.224456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.224488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.224668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.224700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.224875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.225039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.553 [2024-11-20 16:20:23.225072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.553 qpair failed and we were unable to recover it. 00:27:22.553 [2024-11-20 16:20:23.225190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.225222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.225331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.225364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.225483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.225517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.225622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.225654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.225831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.225865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.226055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.226089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.226201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.226233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.226367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.226399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.226522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.226554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.226686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.226718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.226919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.226958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.227071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.227103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.227206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.227237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.227349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.227380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.227535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.227567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.227741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.227811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.228095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.228137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.228259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.228292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.228417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.228449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.228565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.228595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.228704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.228735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.228893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.228923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.229061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.229094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.229222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.229254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.229452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.229484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.229696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.229727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.229836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.229867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.230077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.230110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.230239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.230279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.230425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.230455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.230586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.230617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 
00:27:22.554 [2024-11-20 16:20:23.230742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.230775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.230979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.231012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.554 [2024-11-20 16:20:23.231238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.554 [2024-11-20 16:20:23.231270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.554 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.231378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.231411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.231531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.231564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.231696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.231727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.231850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.231881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.231996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.232136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.232303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.232446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.232592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.232796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.232931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.232971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.233087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.233119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.233265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.233295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.233420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.233451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.233566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.233598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.233721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.233752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.233864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.233896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.234042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.234075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.234203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.234234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.234343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.234373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.234499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.234530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.234673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.234718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.234844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.234879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.234988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.235022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.235130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.235162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.235290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.235329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.235447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.235479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.235611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.235644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.235755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.235785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.237255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.237308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.237454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.237487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.237599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.237632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.239115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.239165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.239306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.239342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.239532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.239572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.240930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.240996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 00:27:22.555 [2024-11-20 16:20:23.241192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.555 [2024-11-20 16:20:23.241225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.555 qpair failed and we were unable to recover it. 
00:27:22.555 [2024-11-20 16:20:23.241332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.241363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.241463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.241493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.241629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.241660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.241785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.241817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.241928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.241973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.242090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.242120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.242302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.242334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.242450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.242480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.242609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.242640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.242835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.242866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.242980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.243013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.243157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.243190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.243373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.243405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.243584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.243623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.243743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.243775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.243891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.243922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.244065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.244096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.244277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.244308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.244420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.244452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.244597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.244627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.244815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.244847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.244962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.244995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.245122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.245154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.245277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.245308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.245549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.245593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.245715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.245748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.245865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.245897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.246024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.246057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.246169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.246201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.246318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.246349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.246469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.246501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.246619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.246650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.246850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.246882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.247010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.247043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.247213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.247244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 
00:27:22.556 [2024-11-20 16:20:23.247373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.247404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.247580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.247611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.247717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.247747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.556 qpair failed and we were unable to recover it. 00:27:22.556 [2024-11-20 16:20:23.247861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.556 [2024-11-20 16:20:23.247893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.248039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.248195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.248345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.248499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.248662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.248809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.248955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.248988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.249168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.249199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.249443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.249474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.249603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.249633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.249751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.249781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.249904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.249935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.250055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.250094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.250277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.250308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.250430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.250461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.250582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.250612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.250731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.250762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.250934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.250978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.251091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.251122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.251232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.251263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.251456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.251486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.251607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.251638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.251751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.251783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.251906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.251936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.252064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.252095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.252199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.252230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.252368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.252399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.252593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.252625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.252737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.252767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.252878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.252909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.253042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.253075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.253279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.253305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.256113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.256148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.256389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.256420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.256602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.256634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.256808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.256839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 
00:27:22.557 [2024-11-20 16:20:23.257033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.257065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.557 [2024-11-20 16:20:23.257261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.557 [2024-11-20 16:20:23.257292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.557 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.257409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.257438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.257559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.257596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.257713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.257743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.257962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.257994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.258115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.258158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.258354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.258385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.258496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.258533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.258719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.258750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.258868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.258900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.259186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.259219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.259335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.259366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.259488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.259518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.259645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.259676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.259849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.259880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.260012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.260045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.260172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.260202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.260318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.260350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.260462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.260492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.260668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.260698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.260885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.260917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.261149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.261195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.261316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.261350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.261479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.261511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.261626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.261665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.261775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.261807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.261912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.261943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.262127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.262158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.262242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.558 [2024-11-20 16:20:23.262299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.262330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.262451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.262482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.262666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.262698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.262829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.262861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.263048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.263081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.263200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.263232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 
00:27:22.558 [2024-11-20 16:20:23.263341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.263373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.263484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.263517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.263630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.263662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.263776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.558 [2024-11-20 16:20:23.263808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.558 qpair failed and we were unable to recover it. 00:27:22.558 [2024-11-20 16:20:23.263923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.263976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.264111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.264142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.264314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.264345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.264466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.264498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.264678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.264710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.264825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.264856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.265035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.265069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.265180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.265211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.265330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.265362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.265600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.265632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.265807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.265839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.265970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.266004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.266142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.266175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.266286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.266318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.266426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.266457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.266693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.266725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.266848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.266880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.267000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.267041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.267164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.267197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.267382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.267414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.267535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.267567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.267762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.267794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.267990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.268023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.268271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.268303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.268440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.268470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.268657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.268689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.268815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.268848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.268985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.269017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.269178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.269211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.269439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.269472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.269571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.269603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 
00:27:22.559 [2024-11-20 16:20:23.269745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.269778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.269972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.270005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.270186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.559 [2024-11-20 16:20:23.270218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.559 qpair failed and we were unable to recover it. 00:27:22.559 [2024-11-20 16:20:23.270331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.270362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.270554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.270586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.270717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.270749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.270864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.270896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.271193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.271228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.271424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.271457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.271576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.271609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.271730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.271763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.271892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.271925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.272182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.272216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.272338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.272372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.272508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.272542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.272648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.272682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.272805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.272837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.273042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.273078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.273261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.273296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.273422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.273457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.273628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.273672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.273852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.273885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.273998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.274030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.274139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.274172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.274349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.274382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.274575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.274607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.274725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.274764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.275020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.275053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.275168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.275201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.275377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.275409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.275513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.275546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.275662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.275694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.275862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.275895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.276043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.276077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.276276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.276308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.276449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.276648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.276680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.276800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.276831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.276961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.276995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.277129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.277162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-11-20 16:20:23.277291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-11-20 16:20:23.277323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-11-20 16:20:23.277429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.277462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.277566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.277598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.277713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.277745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.277924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.277965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.278187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.278220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.278333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.278364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.278561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.278594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.278709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.278742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.278868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.278899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.279090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.279122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.279324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.279356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.279582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.279614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.279827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.279875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.280042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.280076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.280204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.280237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.280353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.280383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.280580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.280613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.280731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.280762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.280936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.280979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.281117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.281148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.281327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.281359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.281571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.281602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.281710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.281743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.281864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.281896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.282015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.282047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.282239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.282271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.282400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.282431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.282601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.282633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.282897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.282928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.283185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.283218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.283333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.283364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.283479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.283511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.283628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.283659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.283834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.283865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-11-20 16:20:23.283981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.284015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.284131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.284162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.284284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.284316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.284432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-11-20 16:20:23.284464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-11-20 16:20:23.284643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.284674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.284955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.284992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.285146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.285179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.285434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.285467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.285642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.285673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.285887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.285919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.286146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.286221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.286365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.286399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.286589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.286620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.286829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.286860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.286973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.287007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.287253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.287286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.287403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.287434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.287602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.287633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.287808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.287839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.288064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.288097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.288221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.288252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.288427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.288458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.288593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.288625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.288752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.288783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.288976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.289010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.289187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.289218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.289487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.289522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.289693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.289724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.289833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.289864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.290048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.290081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.290292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.290323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.290439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.290469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.290597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.290632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.290766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.290797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.290971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.291004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.291107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.291138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.291350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.291380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.291489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.291520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.291699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.291729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.291839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.291870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 16:20:23.291973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.292005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.292122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.292153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-11-20 16:20:23.292281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-11-20 16:20:23.292312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.292481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.292512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.292688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.292720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 16:20:23.292831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.292863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.292989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.293022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.293129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.293160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.293375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.293407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.293515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.293547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 16:20:23.293720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.293752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.293921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.293964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.294187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.294219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.294462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.294495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.294719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.294751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 16:20:23.294852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.294884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.295143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.295176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.295366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.295399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.295640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.295672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.295845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.295882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 16:20:23.296061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.296094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.296222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.296254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.296464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.296496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.296756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.296788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-11-20 16:20:23.296967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-11-20 16:20:23.297001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 16:20:23.304130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.564 [2024-11-20 16:20:23.304160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:22.564 [2024-11-20 16:20:23.304171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.564 [2024-11-20 16:20:23.304181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.564 [2024-11-20 16:20:23.304187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:22.564 [2024-11-20 16:20:23.305840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:22.564 [2024-11-20 16:20:23.305967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:22.564 [2024-11-20 16:20:23.306038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:22.564 [2024-11-20 16:20:23.306040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 
00:27:22.565 [2024-11-20 16:20:23.306820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.306852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.307043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.307077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.307260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.307292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.307403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.307433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.307605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.307636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.307762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.307794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.307967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.307999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.308265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.308297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.308492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.308525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.308764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.308797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.308935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.308977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.309154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.309186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.309306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.309337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.309536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.309568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.309697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.309729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.309900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.309931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.310128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.310162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.310401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.310433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.310632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.310664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.310840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.310871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.310973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.311008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.311267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.311299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.311486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.311517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.311700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.311731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.311972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.312011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.312188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.312221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.312357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.312389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.312629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.312661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.312842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.312874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.312992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.313025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.313219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.313251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.313373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.313404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.313600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.313631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.313745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.313777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.313989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.314021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 
00:27:22.565 [2024-11-20 16:20:23.314199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.314231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.565 [2024-11-20 16:20:23.314339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.565 [2024-11-20 16:20:23.314370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.565 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.314614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.314646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.314898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.314931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.315077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.315109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 
00:27:22.566 [2024-11-20 16:20:23.315349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.315381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.315512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.315544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.315717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.315750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.315929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.315971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 00:27:22.566 [2024-11-20 16:20:23.316185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.566 [2024-11-20 16:20:23.316219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.566 qpair failed and we were unable to recover it. 
00:27:22.566 [2024-11-20 16:20:23.320904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.566 [2024-11-20 16:20:23.320965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.566 qpair failed and we were unable to recover it.
00:27:22.567 [2024-11-20 16:20:23.329943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.567 [2024-11-20 16:20:23.330019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.567 qpair failed and we were unable to recover it.
00:27:22.567 [2024-11-20 16:20:23.330224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.567 [2024-11-20 16:20:23.330269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420
00:27:22.567 qpair failed and we were unable to recover it.
00:27:22.843 [2024-11-20 16:20:23.340931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.340972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.341210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.341245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.341386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.341417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.341596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.341628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.341805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.341838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 
00:27:22.843 [2024-11-20 16:20:23.342080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.342113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.342367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.342399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.342521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.342553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.342817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.342849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.343021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.343054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 
00:27:22.843 [2024-11-20 16:20:23.343179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.343211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-11-20 16:20:23.343473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-11-20 16:20:23.343505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.343641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.343672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.343909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.343941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.344137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.344168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.344408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.344440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.344623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.344655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.344920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.344962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.345143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.345174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.345368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.345400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.345649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.345680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.345796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.345827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.345943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.345987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.346266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.346297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.346535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.346567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.346749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.346781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.346934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.346977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.347097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.347129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.347311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.347342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.347530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.347562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.347758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.347790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.348122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.348174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.348396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.348429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.348642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.348674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.348901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.348934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.349134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.349167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.349347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.349380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.349645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.349677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.349859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.349892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.350180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.350214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.350351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.350383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.350570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.350603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.350775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.350806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.350922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.350965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.351079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.351118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 
00:27:22.844 [2024-11-20 16:20:23.351259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.351291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.351531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.351564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.351688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.351721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.351968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.844 [2024-11-20 16:20:23.352002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.844 qpair failed and we were unable to recover it. 00:27:22.844 [2024-11-20 16:20:23.352144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.352175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.352373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.352406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.352584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.352617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.352760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.352791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.352914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.352945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.353190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.353222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.353425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.353457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.353723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.353755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.353887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.353917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.354066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.354099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.354338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.354370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.354561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.354593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.354849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.354881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.354991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.355023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.355210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.355243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.355492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.355524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.355724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.355757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.355998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.356032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.356241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.356273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.356485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.356518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.356736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.356767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.357046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.357080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.357304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.357344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.357542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.357574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.357764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.357796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.358036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.358070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.358319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.358351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.358538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.358569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.358808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.358840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.359107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.359140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.359378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.359409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.359594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.359626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.359815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.359847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.360022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.360055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.360167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.360199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.360314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.360347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 
00:27:22.845 [2024-11-20 16:20:23.360566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.360597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.360733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.845 [2024-11-20 16:20:23.360764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.845 qpair failed and we were unable to recover it. 00:27:22.845 [2024-11-20 16:20:23.360977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.361010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.361190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.361221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.361467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.361499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.361760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.361793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.362063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.362097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.362235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.362266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.362468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.362499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.362765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.362796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.363005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.363039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.363228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.363260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.363387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.363419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.363609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.363640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.363771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.363803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.363923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.363968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.364208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.364240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.364432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.364462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.364594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.364624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.364744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.364775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.365006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.365039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.365217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.365247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.365364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.365394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.365635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.365666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.365876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.365908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.366043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.366075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.366245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.366280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.366474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.366504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.366705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.366735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.366900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.366931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.367249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.367280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.367455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.367486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.367748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.367778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.367981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.368015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.368128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.368159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.846 [2024-11-20 16:20:23.368402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.368433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.368646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.368675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.368859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.368889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.369068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.369098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 00:27:22.846 [2024-11-20 16:20:23.369220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.846 [2024-11-20 16:20:23.369251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.846 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.369357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.369387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.369641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.369673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.369863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.369894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.370029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.370061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.370249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.370282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.370585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.370616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.370827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.370857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.371040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.371073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.371245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.371275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.371449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.371480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.371652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.371684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.371873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.371903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.372151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.372184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.372371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.372404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.372608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.372641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.372879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.372911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.373114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.373147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.373379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.373412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.373672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.373704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.373894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.373926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.374047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.374079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.374206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.374238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.374424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.374456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.374701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.374732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.374914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.374946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.375146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.375176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.375302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.375339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.375515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.375547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.375740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.375770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.375877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.375907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 
00:27:22.847 [2024-11-20 16:20:23.376104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.376136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.376370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.847 [2024-11-20 16:20:23.376400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.847 qpair failed and we were unable to recover it. 00:27:22.847 [2024-11-20 16:20:23.376529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.376559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.376735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.376766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.376934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.376977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 
00:27:22.848 [2024-11-20 16:20:23.377181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.377212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.377399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.377430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.377666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.377696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.377830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.377860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.378066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.378100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 
00:27:22.848 [2024-11-20 16:20:23.378307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.378339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.378516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.378547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.378723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.378753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.378959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.378991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.379115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.379145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 
00:27:22.848 [2024-11-20 16:20:23.379321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.379353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.379531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.379563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.379752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.379782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.379961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.379993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 00:27:22.848 [2024-11-20 16:20:23.380193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.848 [2024-11-20 16:20:23.380224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.848 qpair failed and we were unable to recover it. 
00:27:22.851 [2024-11-20 16:20:23.404098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.404161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.404314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.404363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.404591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.404625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.404759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.404791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.405001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.405049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 
00:27:22.851 [2024-11-20 16:20:23.405228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.405261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.405454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.405487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.405726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.405756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.405935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.405979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.406173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.406204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 
00:27:22.851 [2024-11-20 16:20:23.406410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.406440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.406691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.406722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.406857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.406888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.407109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.407148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.407269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.407300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 
00:27:22.851 [2024-11-20 16:20:23.407424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.407456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.407719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.407750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.407869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.407900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.408090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.408123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.408253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.408284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 
00:27:22.851 [2024-11-20 16:20:23.408466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.408498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.408737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.408768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.409030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.409063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.851 qpair failed and we were unable to recover it. 00:27:22.851 [2024-11-20 16:20:23.409320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.851 [2024-11-20 16:20:23.409353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.409554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.409586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 [2024-11-20 16:20:23.409779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.409809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.409941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.409983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.852 [2024-11-20 16:20:23.410269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.410302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:22.852 [2024-11-20 16:20:23.410576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.410609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.852 [2024-11-20 16:20:23.410893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.410925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.852 [2024-11-20 16:20:23.411170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.411203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.852 [2024-11-20 16:20:23.411328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.411361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.411567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.411598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 [2024-11-20 16:20:23.411778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.411810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.411939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.411981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.412244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.412275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.412463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.412494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.412624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.412654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 [2024-11-20 16:20:23.412808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.412857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.413085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.413120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.413261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.413294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.413530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.413560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.413746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.413778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 [2024-11-20 16:20:23.413959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.413991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.414181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.414211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.414393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.414424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.414595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.414627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.414868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.414899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 [2024-11-20 16:20:23.415145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.415178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.415426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.415457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.415589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.415619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.415879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.415911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.416068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.416113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 
00:27:22.852 [2024-11-20 16:20:23.416384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.416419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.416565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.852 [2024-11-20 16:20:23.416596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.852 qpair failed and we were unable to recover it. 00:27:22.852 [2024-11-20 16:20:23.416725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.416756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.416963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.416996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.417250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.417282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 
00:27:22.853 [2024-11-20 16:20:23.417469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.417500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.417685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.417716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.417905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.417935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.418136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.418169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.418286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.418317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 
00:27:22.853 [2024-11-20 16:20:23.418525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.418558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.418738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.418769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.418898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.418935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.419124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.419156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.419349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.419380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 
00:27:22.853 [2024-11-20 16:20:23.419489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.419520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.419705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.419737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.419924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.419963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.420243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.420275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 00:27:22.853 [2024-11-20 16:20:23.420401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.420433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it. 
00:27:22.853 [2024-11-20 16:20:23.420542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.853 [2024-11-20 16:20:23.420574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420 00:27:22.853 qpair failed and we were unable to recover it.
00:27:22.854 [2024-11-20 16:20:23.427017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.854 [2024-11-20 16:20:23.427060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.854 qpair failed and we were unable to recover it.
00:27:22.855 [2024-11-20 16:20:23.434717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.855 [2024-11-20 16:20:23.434757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.855 qpair failed and we were unable to recover it.
00:27:22.856 [2024-11-20 16:20:23.441572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.441612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it.
00:27:22.856 [2024-11-20 16:20:23.442056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.442088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.442201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.442233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.442369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.442401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.442524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.442556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.442681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.442713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 
00:27:22.856 [2024-11-20 16:20:23.442907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.442938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.443071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.443103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.443210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.443240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.443345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.443377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.443552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.443583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 
00:27:22.856 [2024-11-20 16:20:23.443696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.443728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.443856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.443888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.444079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.444112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.444306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.444337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.444478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.444509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 
00:27:22.856 [2024-11-20 16:20:23.444625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.444655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.444846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.444879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.444984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.445018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.856 [2024-11-20 16:20:23.445125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.856 [2024-11-20 16:20:23.445156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.856 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.445347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.445379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 [2024-11-20 16:20:23.445495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.445526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.445658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.445690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.445800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.445832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.445956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.445991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.857 [2024-11-20 16:20:23.446179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.446212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 [2024-11-20 16:20:23.446322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.446353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.857 [2024-11-20 16:20:23.446528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.446560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.446693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.446725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.857 [2024-11-20 16:20:23.446975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.447011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.857 [2024-11-20 16:20:23.447122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.447155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.447342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.447374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.447552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.447582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.447698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.447729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.447906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.447938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 [2024-11-20 16:20:23.448067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.448098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.448214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.448245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.448356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.448387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.448559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.448589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.448723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.448753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 [2024-11-20 16:20:23.448871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.448902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.449044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.449077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.449188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.449219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.449393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.449426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.449602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.449633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 [2024-11-20 16:20:23.449736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.449766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.449934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.449979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.450089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.450119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.450243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.450274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.450385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.450416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 
00:27:22.857 [2024-11-20 16:20:23.450531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.450567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.857 qpair failed and we were unable to recover it. 00:27:22.857 [2024-11-20 16:20:23.450741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.857 [2024-11-20 16:20:23.450772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.450944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.450984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.451090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.451122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.451332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.451363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 
00:27:22.858 [2024-11-20 16:20:23.451598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.451629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.451748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.451779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.451974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.452186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.452335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 
00:27:22.858 [2024-11-20 16:20:23.452474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.452612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.452760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.452921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.452963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.453080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.453112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 
00:27:22.858 [2024-11-20 16:20:23.453226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.453257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.453368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.453400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.453507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.453539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.453708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.453740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.453856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.453888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 
00:27:22.858 [2024-11-20 16:20:23.454008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.454041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.454206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.454238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.454427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.454459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.454631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.454664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 00:27:22.858 [2024-11-20 16:20:23.454781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.858 [2024-11-20 16:20:23.454811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.858 qpair failed and we were unable to recover it. 
00:27:22.858 [2024-11-20 16:20:23.454936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.858 [2024-11-20 16:20:23.454981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420
00:27:22.858 qpair failed and we were unable to recover it.
00:27:22.858 [2024-11-20 16:20:23.455301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.858 [2024-11-20 16:20:23.455338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420
00:27:22.858 qpair failed and we were unable to recover it.
00:27:22.859 [2024-11-20 16:20:23.463134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.859 [2024-11-20 16:20:23.463181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd848000b90 with addr=10.0.0.2, port=4420
00:27:22.859 qpair failed and we were unable to recover it.
[above connect()/qpair-failure sequence repeated ~115 times between 16:20:23.454936 and 16:20:23.479049, cycling tqpair handles 0x1b7cba0, 0x7fd844000b90, and 0x7fd848000b90; all attempts target addr=10.0.0.2, port=4420 with errno = 111]
00:27:22.861 Malloc0
00:27:22.861 qpair failed and we were unable to recover it.
00:27:22.861 [2024-11-20 16:20:23.479249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.861 [2024-11-20 16:20:23.479279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.861 qpair failed and we were unable to recover it. 00:27:22.861 [2024-11-20 16:20:23.479563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.861 [2024-11-20 16:20:23.479594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.861 qpair failed and we were unable to recover it. 00:27:22.861 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.861 [2024-11-20 16:20:23.479725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.861 [2024-11-20 16:20:23.479757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.861 qpair failed and we were unable to recover it. 00:27:22.861 [2024-11-20 16:20:23.480026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.861 [2024-11-20 16:20:23.480059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:22.862 [2024-11-20 16:20:23.480187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.480218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.862 [2024-11-20 16:20:23.480427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.480459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.480679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.862 [2024-11-20 16:20:23.480711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.480925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.480965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.481160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.481192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.481318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.481349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.481466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.481497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.481617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.481648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.481888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.481919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cba0 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.482126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.482168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.482290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.482322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.482521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.482552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.482654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.482683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.482938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.482983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.483225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.483256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.483442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.483473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.483657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.483688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.483821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.483852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.484087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.484120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.484400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.484432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.484690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.484720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.484904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.484935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.485200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.485231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.485344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.485373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.485618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.485649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.485830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.485861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.486116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.486147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.486352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.486383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.486512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.486542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.486642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.862 [2024-11-20 16:20:23.486713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.486743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.486934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.486973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.487100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.487129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.487299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.487331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.487458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.487488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 
00:27:22.862 [2024-11-20 16:20:23.487660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.862 [2024-11-20 16:20:23.487691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.862 qpair failed and we were unable to recover it. 00:27:22.862 [2024-11-20 16:20:23.487889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.487920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.488195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.488226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.488413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.488443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.488565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.488594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 
00:27:22.863 [2024-11-20 16:20:23.488767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.488799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.489039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.489071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.489253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.489285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.489465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.489501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.489685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.489715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 
00:27:22.863 [2024-11-20 16:20:23.489897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.489928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.490061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.490092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.490212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.490243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.490409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.490439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.490553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.490582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 
00:27:22.863 [2024-11-20 16:20:23.490850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.490882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.491062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.491094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.491279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.491310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.491522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.491554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.491679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.491710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 
00:27:22.863 [2024-11-20 16:20:23.491962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.491994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.492161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.492190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.492398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.492428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.492664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.492696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.492936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.492978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 
00:27:22.863 [2024-11-20 16:20:23.493244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.493275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.493512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.493544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.493664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.493695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.493902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.493933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.494152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.494183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 
00:27:22.863 [2024-11-20 16:20:23.494371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.494402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.494588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.494621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.494803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.494833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.863 [2024-11-20 16:20:23.495036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.863 [2024-11-20 16:20:23.495069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.863 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.495249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.495280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.864 [2024-11-20 16:20:23.495458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.495490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.495677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.495708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.864 [2024-11-20 16:20:23.495978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.496012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.864 [2024-11-20 16:20:23.496271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.496303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.864 [2024-11-20 16:20:23.496537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.496569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.496735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.496766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.497028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.497061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.497230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.497260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.497447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.497477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 [2024-11-20 16:20:23.497645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.497677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.497812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.497844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.497964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.497996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.498175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.498206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.498383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.498414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 [2024-11-20 16:20:23.498706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.498738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.498858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.498890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.499031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.499063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.499166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.499197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.499384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.499416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 [2024-11-20 16:20:23.499525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.499556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.499696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.499727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.499905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.499936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.500147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.500179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.500294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.500324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 [2024-11-20 16:20:23.500430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.500461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.500593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.500624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.500748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.500778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.500967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.501001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.501208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.501240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 [2024-11-20 16:20:23.501422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.501453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.501638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.501669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.501909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.501941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.502067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.502098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.864 [2024-11-20 16:20:23.502269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.502300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 
00:27:22.864 [2024-11-20 16:20:23.502417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.864 [2024-11-20 16:20:23.502448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.864 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.502685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.502717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.502831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.502862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.502976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.503010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.503247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.503285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 
00:27:22.865 [2024-11-20 16:20:23.503550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.503581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.503841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.503873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.503975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.504008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.504130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.504162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.504345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.504377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 
00:27:22.865 [2024-11-20 16:20:23.504634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.504666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.504839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.504870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.505118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.505151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.505280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.505311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.505499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.505530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 
00:27:22.865 [2024-11-20 16:20:23.505757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.505787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.505927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.505966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.506205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.506238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.506354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.506385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.506554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.506585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 
00:27:22.865 [2024-11-20 16:20:23.506772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.506803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.507010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.507041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.507153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.507185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.865 [2024-11-20 16:20:23.507442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.507473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 
00:27:22.865 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.865 [2024-11-20 16:20:23.507647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.507679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.507856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.865 [2024-11-20 16:20:23.507888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.508009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.508040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.508140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.508172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.865 qpair failed and we were unable to recover it. 
00:27:22.865 [2024-11-20 16:20:23.508413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.508444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd844000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.508688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.508737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.508966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.509000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.865 [2024-11-20 16:20:23.509248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.865 [2024-11-20 16:20:23.509279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.865 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.509459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.509489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.509762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.509793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.509982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.510015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.510257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.510288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.510398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.510430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.510624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.510656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.510838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.510869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.511002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.511035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.511242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.511273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.511452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.511484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.511673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.511712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.511871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.511903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.512089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.512122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.512295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.512327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.512447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.512479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.512683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.512714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.512900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.512932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.513115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.513148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.513368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.513400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.513658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.513689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.513876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.513908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.514162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.514195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.514437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.514469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.514593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.514625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.514750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.514783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.514962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.514996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.515218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.866 [2024-11-20 16:20:23.515251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.515374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.515406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.866 [2024-11-20 16:20:23.515596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.515632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 [2024-11-20 16:20:23.515890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.866 [2024-11-20 16:20:23.515923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 
00:27:22.866 [2024-11-20 16:20:23.516108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.866 [2024-11-20 16:20:23.516141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.866 qpair failed and we were unable to recover it. 00:27:22.866 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.866 [2024-11-20 16:20:23.516310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.867 [2024-11-20 16:20:23.516344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.867 qpair failed and we were unable to recover it. 00:27:22.867 [2024-11-20 16:20:23.516460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.867 [2024-11-20 16:20:23.516490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.867 qpair failed and we were unable to recover it. 00:27:22.867 [2024-11-20 16:20:23.516750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.867 [2024-11-20 16:20:23.516783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.867 qpair failed and we were unable to recover it. 00:27:22.867 [2024-11-20 16:20:23.516901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.867 [2024-11-20 16:20:23.516933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420 00:27:22.867 qpair failed and we were unable to recover it. 
00:27:22.867 [2024-11-20 16:20:23.517065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.517099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.517221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.517253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.517450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.517482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.517658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.517689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.517821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.517853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.517980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.518014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.518218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.518250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.518450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.518483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.518653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.867 [2024-11-20 16:20:23.518685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd850000b90 with addr=10.0.0.2, port=4420
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.518892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:22.867 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.867 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:22.867 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.867 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:22.867 [2024-11-20 16:20:23.527381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.527538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.867 [2024-11-20 16:20:23.527585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.867 [2024-11-20 16:20:23.527608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.867 [2024-11-20 16:20:23.527630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.867 [2024-11-20 16:20:23.527693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.867 16:20:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2895213
00:27:22.867 [2024-11-20 16:20:23.537286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.537369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.867 [2024-11-20 16:20:23.537399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.867 [2024-11-20 16:20:23.537416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.867 [2024-11-20 16:20:23.537429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.867 [2024-11-20 16:20:23.537464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.547263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.547333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.867 [2024-11-20 16:20:23.547354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.867 [2024-11-20 16:20:23.547365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.867 [2024-11-20 16:20:23.547374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.867 [2024-11-20 16:20:23.547396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.557280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.557358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.867 [2024-11-20 16:20:23.557374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.867 [2024-11-20 16:20:23.557381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.867 [2024-11-20 16:20:23.557388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.867 [2024-11-20 16:20:23.557405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.567261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.567364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.867 [2024-11-20 16:20:23.567377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.867 [2024-11-20 16:20:23.567384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.867 [2024-11-20 16:20:23.567391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.867 [2024-11-20 16:20:23.567410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.577263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.577319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.867 [2024-11-20 16:20:23.577333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.867 [2024-11-20 16:20:23.577340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.867 [2024-11-20 16:20:23.577345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.867 [2024-11-20 16:20:23.577361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.867 qpair failed and we were unable to recover it.
00:27:22.867 [2024-11-20 16:20:23.587293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.867 [2024-11-20 16:20:23.587350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.587364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.587371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.587377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.587393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.597343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.597448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.597462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.597469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.597475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.597492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.607348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.607404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.607417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.607424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.607430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.607445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.617417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.617504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.617518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.617525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.617531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.617546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.627408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.627459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.627473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.627480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.627486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.627502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.637424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.637481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.637495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.637501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.637507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.637523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.647461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.647519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.647533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.647540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.647546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.647562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:22.868 [2024-11-20 16:20:23.657469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.868 [2024-11-20 16:20:23.657528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.868 [2024-11-20 16:20:23.657546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.868 [2024-11-20 16:20:23.657553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.868 [2024-11-20 16:20:23.657561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:22.868 [2024-11-20 16:20:23.657580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:22.868 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.667585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.667687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.667703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.667709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.667716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.667732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.677545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.677602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.677616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.677623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.677629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.677644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.687567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.687621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.687635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.687642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.687648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.687663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.697593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.697657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.697671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.697678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.697683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.697702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.707627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.707683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.707697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.707704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.707710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.707725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.717654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.717716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.717730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.717737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.717743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.717758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.727675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.727734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.727748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.727755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.727762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.727777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.737700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.737754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.737768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.737775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.737781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.737796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.747733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.127 [2024-11-20 16:20:23.747786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.127 [2024-11-20 16:20:23.747800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.127 [2024-11-20 16:20:23.747806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.127 [2024-11-20 16:20:23.747812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.127 [2024-11-20 16:20:23.747827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.127 qpair failed and we were unable to recover it.
00:27:23.127 [2024-11-20 16:20:23.757774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.128 [2024-11-20 16:20:23.757880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.128 [2024-11-20 16:20:23.757893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.128 [2024-11-20 16:20:23.757900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.128 [2024-11-20 16:20:23.757906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.128 [2024-11-20 16:20:23.757922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.128 qpair failed and we were unable to recover it.
00:27:23.128 [2024-11-20 16:20:23.767790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.128 [2024-11-20 16:20:23.767845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.128 [2024-11-20 16:20:23.767859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.128 [2024-11-20 16:20:23.767865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.128 [2024-11-20 16:20:23.767871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.128 [2024-11-20 16:20:23.767886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.128 qpair failed and we were unable to recover it.
00:27:23.128 [2024-11-20 16:20:23.777734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.128 [2024-11-20 16:20:23.777792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.128 [2024-11-20 16:20:23.777806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.128 [2024-11-20 16:20:23.777812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.128 [2024-11-20 16:20:23.777818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.128 [2024-11-20 16:20:23.777834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.128 qpair failed and we were unable to recover it.
00:27:23.128 [2024-11-20 16:20:23.787884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.128 [2024-11-20 16:20:23.787939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.128 [2024-11-20 16:20:23.787961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.128 [2024-11-20 16:20:23.787968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.128 [2024-11-20 16:20:23.787974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.128 [2024-11-20 16:20:23.787990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.128 qpair failed and we were unable to recover it.
00:27:23.128 [2024-11-20 16:20:23.797876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.128 [2024-11-20 16:20:23.797933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.128 [2024-11-20 16:20:23.797951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.128 [2024-11-20 16:20:23.797958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.128 [2024-11-20 16:20:23.797965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.128 [2024-11-20 16:20:23.797980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.128 qpair failed and we were unable to recover it.
00:27:23.128 [2024-11-20 16:20:23.807901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.128 [2024-11-20 16:20:23.807959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.128 [2024-11-20 16:20:23.807973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.128 [2024-11-20 16:20:23.807980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.128 [2024-11-20 16:20:23.807986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.128 [2024-11-20 16:20:23.808001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.128 qpair failed and we were unable to recover it.
00:27:23.128 [2024-11-20 16:20:23.817919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.128 [2024-11-20 16:20:23.817976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.128 [2024-11-20 16:20:23.817989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.128 [2024-11-20 16:20:23.817996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.128 [2024-11-20 16:20:23.818002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.128 [2024-11-20 16:20:23.818017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.128 qpair failed and we were unable to recover it. 
00:27:23.128 [2024-11-20 16:20:23.827985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.128 [2024-11-20 16:20:23.828044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.128 [2024-11-20 16:20:23.828058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.128 [2024-11-20 16:20:23.828065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.128 [2024-11-20 16:20:23.828075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.128 [2024-11-20 16:20:23.828091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.128 qpair failed and we were unable to recover it. 
00:27:23.128 [2024-11-20 16:20:23.837919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.128 [2024-11-20 16:20:23.837984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.128 [2024-11-20 16:20:23.837998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.128 [2024-11-20 16:20:23.838005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.128 [2024-11-20 16:20:23.838011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.128 [2024-11-20 16:20:23.838026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.128 qpair failed and we were unable to recover it. 
00:27:23.128 [2024-11-20 16:20:23.848017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.128 [2024-11-20 16:20:23.848074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.128 [2024-11-20 16:20:23.848087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.128 [2024-11-20 16:20:23.848094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.128 [2024-11-20 16:20:23.848100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.128 [2024-11-20 16:20:23.848115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.858035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.858091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.858105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.858111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.858117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.858132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.868065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.868120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.868133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.868140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.868146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.868162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.878119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.878176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.878190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.878197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.878203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.878218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.888138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.888234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.888248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.888254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.888260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.888275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.898295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.898347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.898361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.898367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.898373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.898388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.908179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.908237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.908251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.908258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.908264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.908279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.918218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.918277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.918295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.918302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.918307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.918324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.928270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.928326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.928340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.928347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.928353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.928368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.938274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.938323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.938337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.938344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.938350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.938366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.948285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.948338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.948352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.948358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.948364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.948379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.129 [2024-11-20 16:20:23.958372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.129 [2024-11-20 16:20:23.958442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.129 [2024-11-20 16:20:23.958456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.129 [2024-11-20 16:20:23.958466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.129 [2024-11-20 16:20:23.958472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.129 [2024-11-20 16:20:23.958488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.129 qpair failed and we were unable to recover it. 
00:27:23.389 [2024-11-20 16:20:23.968378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.389 [2024-11-20 16:20:23.968452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.389 [2024-11-20 16:20:23.968467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.389 [2024-11-20 16:20:23.968474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.389 [2024-11-20 16:20:23.968480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.389 [2024-11-20 16:20:23.968495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.389 qpair failed and we were unable to recover it. 
00:27:23.389 [2024-11-20 16:20:23.978382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.389 [2024-11-20 16:20:23.978436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.389 [2024-11-20 16:20:23.978449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.389 [2024-11-20 16:20:23.978456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.389 [2024-11-20 16:20:23.978463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.389 [2024-11-20 16:20:23.978478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.389 qpair failed and we were unable to recover it. 
00:27:23.389 [2024-11-20 16:20:23.988457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.389 [2024-11-20 16:20:23.988552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.389 [2024-11-20 16:20:23.988565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.389 [2024-11-20 16:20:23.988571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.389 [2024-11-20 16:20:23.988577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.389 [2024-11-20 16:20:23.988592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.389 qpair failed and we were unable to recover it. 
00:27:23.389 [2024-11-20 16:20:23.998443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.389 [2024-11-20 16:20:23.998502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.389 [2024-11-20 16:20:23.998515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.389 [2024-11-20 16:20:23.998522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.389 [2024-11-20 16:20:23.998528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.389 [2024-11-20 16:20:23.998543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.389 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.008470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.008526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.008540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.008547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.008553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.008568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.018496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.018549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.018562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.018569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.018575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.018591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.028521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.028576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.028590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.028597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.028603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.028619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.038584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.038652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.038666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.038673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.038679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.038694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.048580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.048639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.048653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.048659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.048665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.048680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.058531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.058586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.058600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.058607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.058613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.058629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.068636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.068686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.068699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.068706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.068712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.068727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.078687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.078743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.078757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.078763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.078769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.078785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.088722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.088787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.088800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.088810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.088817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.088832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.098776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.098830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.098843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.098849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.098855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.098870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.108736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.108791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.108805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.108811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.108818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.108833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.118834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.118940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.118957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.118964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.118970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.118985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.128808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.390 [2024-11-20 16:20:24.128879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.390 [2024-11-20 16:20:24.128892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.390 [2024-11-20 16:20:24.128899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.390 [2024-11-20 16:20:24.128904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.390 [2024-11-20 16:20:24.128923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.390 qpair failed and we were unable to recover it. 
00:27:23.390 [2024-11-20 16:20:24.138823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.138876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.138891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.138898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.138904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.138919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.148841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.148919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.148933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.148940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.148946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.148971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.158886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.158940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.158958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.158965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.158970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.158986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.168919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.168986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.169000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.169006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.169012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.169027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.178976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.179033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.179047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.179054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.179059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.179074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.188961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.189015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.189030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.189036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.189043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.189058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.198998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.199052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.199066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.199072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.199078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.199093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.209025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.209081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.209095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.209102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.209108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.209123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.391 [2024-11-20 16:20:24.219075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.391 [2024-11-20 16:20:24.219137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.391 [2024-11-20 16:20:24.219156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.391 [2024-11-20 16:20:24.219163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.391 [2024-11-20 16:20:24.219169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.391 [2024-11-20 16:20:24.219184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.391 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.229099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.229162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.229177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.229184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.229190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.229205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.239124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.239183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.239197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.239204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.239210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.239226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.249117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.249197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.249211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.249218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.249223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.249239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.259164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.259221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.259235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.259241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.259251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.259267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.269186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.269289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.269303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.269310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.269316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.269331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.279192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.279250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.279264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.279271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.279277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.279292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.289240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.289297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.289311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.289317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.289323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.289339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.299257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.299312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.299326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.299333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.299339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.299354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.309277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.309335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.309349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.309356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.309361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.309377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.319370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.319433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.319448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.319454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.651 [2024-11-20 16:20:24.319461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.651 [2024-11-20 16:20:24.319476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.651 qpair failed and we were unable to recover it. 
00:27:23.651 [2024-11-20 16:20:24.329416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.651 [2024-11-20 16:20:24.329473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.651 [2024-11-20 16:20:24.329487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.651 [2024-11-20 16:20:24.329493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.329499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.329515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.339428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.339488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.339502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.339509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.339515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.339531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.349466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.349524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.349541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.349548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.349554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.349570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.359478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.359537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.359552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.359559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.359565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.359580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.369519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.369573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.369586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.369593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.369599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.369614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.379510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.379565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.379579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.379586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.379592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.379608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.389596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.389652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.389666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.389673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.389682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.389698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.399595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.399650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.399665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.399672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.399678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.399693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.409597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.409664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.409678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.409684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.409690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.409705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.419645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.419700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.419713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.419720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.419726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.419742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.429583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.429634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.429647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.429654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.429660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.429676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.439718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.439774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.439788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.439795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.439801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.439816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.449635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.449690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.449703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.652 [2024-11-20 16:20:24.449710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.652 [2024-11-20 16:20:24.449716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.652 [2024-11-20 16:20:24.449731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.652 qpair failed and we were unable to recover it. 
00:27:23.652 [2024-11-20 16:20:24.459728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.652 [2024-11-20 16:20:24.459783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.652 [2024-11-20 16:20:24.459797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.653 [2024-11-20 16:20:24.459804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.653 [2024-11-20 16:20:24.459810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.653 [2024-11-20 16:20:24.459825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.653 qpair failed and we were unable to recover it. 
00:27:23.653 [2024-11-20 16:20:24.469750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.653 [2024-11-20 16:20:24.469801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.653 [2024-11-20 16:20:24.469815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.653 [2024-11-20 16:20:24.469821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.653 [2024-11-20 16:20:24.469827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.653 [2024-11-20 16:20:24.469843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.653 qpair failed and we were unable to recover it. 
00:27:23.653 [2024-11-20 16:20:24.479802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.653 [2024-11-20 16:20:24.479858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.653 [2024-11-20 16:20:24.479878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.653 [2024-11-20 16:20:24.479887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.653 [2024-11-20 16:20:24.479892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.653 [2024-11-20 16:20:24.479908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.653 qpair failed and we were unable to recover it. 
00:27:23.912 [2024-11-20 16:20:24.489841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.912 [2024-11-20 16:20:24.489937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.912 [2024-11-20 16:20:24.489957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.912 [2024-11-20 16:20:24.489965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.912 [2024-11-20 16:20:24.489971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.912 [2024-11-20 16:20:24.489986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.912 qpair failed and we were unable to recover it. 
00:27:23.912 [2024-11-20 16:20:24.499865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.912 [2024-11-20 16:20:24.499918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.912 [2024-11-20 16:20:24.499933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.499939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.499945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.499965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.509819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.509873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.509887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.509894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.509900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.509915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.519878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.519966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.519981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.519991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.519996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.520012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.529939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.529998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.530012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.530018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.530024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.530040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.539956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.540012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.540026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.540033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.540040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.540056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.549993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.550053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.550067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.550074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.550080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.550096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.560026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.560084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.560099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.560106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.560113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.560128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.569981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.570039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.570053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.570061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.570067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.570082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.580104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.580158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.580172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.580179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.580185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.580201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.590102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.590156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.590168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.590175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.590181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.590196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.600135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.600193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.600206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.913 [2024-11-20 16:20:24.600213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.913 [2024-11-20 16:20:24.600219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.913 [2024-11-20 16:20:24.600234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.913 qpair failed and we were unable to recover it. 
00:27:23.913 [2024-11-20 16:20:24.610177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.913 [2024-11-20 16:20:24.610243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.913 [2024-11-20 16:20:24.610257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.610263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.610269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.610284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.620141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.620202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.620216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.620223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.620229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.620244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.630226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.630284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.630297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.630304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.630310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.630325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.640257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.640313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.640327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.640333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.640339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.640354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.650229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.650285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.650299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.650309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.650315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.650331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.660302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.660356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.660369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.660376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.660382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.660398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.670378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.670463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.670477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.670484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.670490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.670504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.680300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.680359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.680372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.680379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.680385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.680400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.690370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.690432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.690445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.690452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.690458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.690476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.700417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.914 [2024-11-20 16:20:24.700470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.914 [2024-11-20 16:20:24.700484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.914 [2024-11-20 16:20:24.700490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.914 [2024-11-20 16:20:24.700496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:23.914 [2024-11-20 16:20:24.700512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-11-20 16:20:24.710492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.914 [2024-11-20 16:20:24.710552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.914 [2024-11-20 16:20:24.710565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.914 [2024-11-20 16:20:24.710572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.914 [2024-11-20 16:20:24.710577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.914 [2024-11-20 16:20:24.710593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.914 qpair failed and we were unable to recover it.
00:27:23.914 [2024-11-20 16:20:24.720489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.915 [2024-11-20 16:20:24.720548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.915 [2024-11-20 16:20:24.720561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.915 [2024-11-20 16:20:24.720569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.915 [2024-11-20 16:20:24.720575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.915 [2024-11-20 16:20:24.720590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.915 qpair failed and we were unable to recover it.
00:27:23.915 [2024-11-20 16:20:24.730520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.915 [2024-11-20 16:20:24.730619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.915 [2024-11-20 16:20:24.730633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.915 [2024-11-20 16:20:24.730639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.915 [2024-11-20 16:20:24.730645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.915 [2024-11-20 16:20:24.730660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.915 qpair failed and we were unable to recover it.
00:27:23.915 [2024-11-20 16:20:24.740526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.915 [2024-11-20 16:20:24.740583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.915 [2024-11-20 16:20:24.740597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.915 [2024-11-20 16:20:24.740604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.915 [2024-11-20 16:20:24.740609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:23.915 [2024-11-20 16:20:24.740624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:23.915 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.750617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.175 [2024-11-20 16:20:24.750724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.175 [2024-11-20 16:20:24.750740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.175 [2024-11-20 16:20:24.750748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.175 [2024-11-20 16:20:24.750754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.175 [2024-11-20 16:20:24.750771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.175 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.760611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.175 [2024-11-20 16:20:24.760670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.175 [2024-11-20 16:20:24.760685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.175 [2024-11-20 16:20:24.760692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.175 [2024-11-20 16:20:24.760698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.175 [2024-11-20 16:20:24.760714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.175 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.770619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.175 [2024-11-20 16:20:24.770675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.175 [2024-11-20 16:20:24.770689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.175 [2024-11-20 16:20:24.770696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.175 [2024-11-20 16:20:24.770702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.175 [2024-11-20 16:20:24.770717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.175 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.780633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.175 [2024-11-20 16:20:24.780684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.175 [2024-11-20 16:20:24.780701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.175 [2024-11-20 16:20:24.780708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.175 [2024-11-20 16:20:24.780714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.175 [2024-11-20 16:20:24.780729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.175 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.790667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.175 [2024-11-20 16:20:24.790714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.175 [2024-11-20 16:20:24.790728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.175 [2024-11-20 16:20:24.790735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.175 [2024-11-20 16:20:24.790741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.175 [2024-11-20 16:20:24.790756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.175 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.800705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.175 [2024-11-20 16:20:24.800775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.175 [2024-11-20 16:20:24.800789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.175 [2024-11-20 16:20:24.800796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.175 [2024-11-20 16:20:24.800801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.175 [2024-11-20 16:20:24.800817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.175 qpair failed and we were unable to recover it.
00:27:24.175 [2024-11-20 16:20:24.810732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.810786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.810800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.810806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.810812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.810827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.820752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.820810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.820824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.820831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.820841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.820856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.830792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.830841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.830855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.830861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.830867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.830883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.840842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.840897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.840910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.840917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.840922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.840938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.850861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.850915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.850929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.850936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.850942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.850961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.860859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.860913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.860926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.860933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.860939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.860958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.870877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.870927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.870940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.870950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.870957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.870974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.880998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.881056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.881069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.881075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.881081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.881096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.890956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.891011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.891024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.891031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.891037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.891052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.900981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.901036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.901049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.901056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.901062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.901077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.911009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.911064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.911081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.911088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.911094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.911109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.921092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.921147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.921161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.921168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.921174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.921190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.931060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.176 [2024-11-20 16:20:24.931114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.176 [2024-11-20 16:20:24.931128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.176 [2024-11-20 16:20:24.931135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.176 [2024-11-20 16:20:24.931141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.176 [2024-11-20 16:20:24.931157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.176 qpair failed and we were unable to recover it.
00:27:24.176 [2024-11-20 16:20:24.941061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:24.941113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:24.941127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:24.941133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:24.941140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:24.941155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.177 [2024-11-20 16:20:24.951115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:24.951168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:24.951182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:24.951188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:24.951198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:24.951213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.177 [2024-11-20 16:20:24.961209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:24.961269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:24.961283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:24.961290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:24.961295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:24.961310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.177 [2024-11-20 16:20:24.971197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:24.971262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:24.971276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:24.971283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:24.971289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:24.971304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.177 [2024-11-20 16:20:24.981208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:24.981270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:24.981284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:24.981290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:24.981296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:24.981312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.177 [2024-11-20 16:20:24.991226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:24.991277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:24.991290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:24.991297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:24.991302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:24.991318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.177 [2024-11-20 16:20:25.001269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.177 [2024-11-20 16:20:25.001373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.177 [2024-11-20 16:20:25.001387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.177 [2024-11-20 16:20:25.001393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.177 [2024-11-20 16:20:25.001400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.177 [2024-11-20 16:20:25.001415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.177 qpair failed and we were unable to recover it.
00:27:24.437 [2024-11-20 16:20:25.011301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.437 [2024-11-20 16:20:25.011360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.437 [2024-11-20 16:20:25.011375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.437 [2024-11-20 16:20:25.011382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.437 [2024-11-20 16:20:25.011387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.437 [2024-11-20 16:20:25.011404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.437 qpair failed and we were unable to recover it.
00:27:24.437 [2024-11-20 16:20:25.021330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.437 [2024-11-20 16:20:25.021384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.437 [2024-11-20 16:20:25.021399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.437 [2024-11-20 16:20:25.021405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.437 [2024-11-20 16:20:25.021411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.437 [2024-11-20 16:20:25.021427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.437 qpair failed and we were unable to recover it.
00:27:24.437 [2024-11-20 16:20:25.031349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.437 [2024-11-20 16:20:25.031407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.437 [2024-11-20 16:20:25.031421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.437 [2024-11-20 16:20:25.031427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.437 [2024-11-20 16:20:25.031433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.437 [2024-11-20 16:20:25.031448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.437 qpair failed and we were unable to recover it.
00:27:24.437 [2024-11-20 16:20:25.041372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.437 [2024-11-20 16:20:25.041448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.437 [2024-11-20 16:20:25.041466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.437 [2024-11-20 16:20:25.041472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.437 [2024-11-20 16:20:25.041478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.437 [2024-11-20 16:20:25.041493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.437 qpair failed and we were unable to recover it.
00:27:24.437 [2024-11-20 16:20:25.051406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.437 [2024-11-20 16:20:25.051460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.437 [2024-11-20 16:20:25.051474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.437 [2024-11-20 16:20:25.051481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.437 [2024-11-20 16:20:25.051487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:24.437 [2024-11-20 16:20:25.051502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:24.437 qpair failed and we were unable to recover it.
00:27:24.437 [2024-11-20 16:20:25.061409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.437 [2024-11-20 16:20:25.061463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.437 [2024-11-20 16:20:25.061477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.437 [2024-11-20 16:20:25.061484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.437 [2024-11-20 16:20:25.061490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.437 [2024-11-20 16:20:25.061505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.437 qpair failed and we were unable to recover it. 
00:27:24.437 [2024-11-20 16:20:25.071445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.437 [2024-11-20 16:20:25.071505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.437 [2024-11-20 16:20:25.071518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.437 [2024-11-20 16:20:25.071525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.437 [2024-11-20 16:20:25.071531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.437 [2024-11-20 16:20:25.071547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.437 qpair failed and we were unable to recover it. 
00:27:24.437 [2024-11-20 16:20:25.081481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.437 [2024-11-20 16:20:25.081586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.437 [2024-11-20 16:20:25.081600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.437 [2024-11-20 16:20:25.081610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.437 [2024-11-20 16:20:25.081616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.437 [2024-11-20 16:20:25.081631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.437 qpair failed and we were unable to recover it. 
00:27:24.437 [2024-11-20 16:20:25.091521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.437 [2024-11-20 16:20:25.091585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.437 [2024-11-20 16:20:25.091600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.437 [2024-11-20 16:20:25.091607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.437 [2024-11-20 16:20:25.091613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.437 [2024-11-20 16:20:25.091628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.437 qpair failed and we were unable to recover it. 
00:27:24.437 [2024-11-20 16:20:25.101604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.437 [2024-11-20 16:20:25.101662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.437 [2024-11-20 16:20:25.101676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.437 [2024-11-20 16:20:25.101683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.437 [2024-11-20 16:20:25.101689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.101704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.111600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.111656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.111670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.111677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.111683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.111698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.121625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.121732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.121745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.121752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.121759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.121780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.131624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.131673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.131686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.131693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.131699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.131714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.141659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.141767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.141783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.141789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.141796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.141811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.151607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.151701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.151715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.151721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.151727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.151743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.161713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.161769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.161783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.161790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.161796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.161810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.171775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.171836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.171850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.171857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.171863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.171878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.181765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.181818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.181831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.181838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.181844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.181859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.191787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.191866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.191880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.191887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.191893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.191909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.201830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.201885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.201899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.201906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.201912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.201928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.211857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.211914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.211927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.211937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.211943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.211963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.221875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.221927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.221941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.221951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.438 [2024-11-20 16:20:25.221957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.438 [2024-11-20 16:20:25.221972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.438 qpair failed and we were unable to recover it. 
00:27:24.438 [2024-11-20 16:20:25.231910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.438 [2024-11-20 16:20:25.231965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.438 [2024-11-20 16:20:25.231978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.438 [2024-11-20 16:20:25.231985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.439 [2024-11-20 16:20:25.231991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.439 [2024-11-20 16:20:25.232006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.439 qpair failed and we were unable to recover it. 
00:27:24.439 [2024-11-20 16:20:25.241940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.439 [2024-11-20 16:20:25.242007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.439 [2024-11-20 16:20:25.242020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.439 [2024-11-20 16:20:25.242027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.439 [2024-11-20 16:20:25.242033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.439 [2024-11-20 16:20:25.242048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.439 qpair failed and we were unable to recover it. 
00:27:24.439 [2024-11-20 16:20:25.251969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.439 [2024-11-20 16:20:25.252024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.439 [2024-11-20 16:20:25.252037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.439 [2024-11-20 16:20:25.252044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.439 [2024-11-20 16:20:25.252050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.439 [2024-11-20 16:20:25.252069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.439 qpair failed and we were unable to recover it. 
00:27:24.439 [2024-11-20 16:20:25.262002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.439 [2024-11-20 16:20:25.262057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.439 [2024-11-20 16:20:25.262070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.439 [2024-11-20 16:20:25.262077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.439 [2024-11-20 16:20:25.262083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.439 [2024-11-20 16:20:25.262098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.439 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.272062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.699 [2024-11-20 16:20:25.272124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.699 [2024-11-20 16:20:25.272139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.699 [2024-11-20 16:20:25.272146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.699 [2024-11-20 16:20:25.272152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.699 [2024-11-20 16:20:25.272169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.699 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.282101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.699 [2024-11-20 16:20:25.282173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.699 [2024-11-20 16:20:25.282210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.699 [2024-11-20 16:20:25.282218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.699 [2024-11-20 16:20:25.282226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.699 [2024-11-20 16:20:25.282250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.699 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.292039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.699 [2024-11-20 16:20:25.292099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.699 [2024-11-20 16:20:25.292113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.699 [2024-11-20 16:20:25.292120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.699 [2024-11-20 16:20:25.292126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.699 [2024-11-20 16:20:25.292143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.699 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.302133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.699 [2024-11-20 16:20:25.302183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.699 [2024-11-20 16:20:25.302197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.699 [2024-11-20 16:20:25.302204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.699 [2024-11-20 16:20:25.302210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.699 [2024-11-20 16:20:25.302225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.699 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.312162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.699 [2024-11-20 16:20:25.312216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.699 [2024-11-20 16:20:25.312230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.699 [2024-11-20 16:20:25.312236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.699 [2024-11-20 16:20:25.312242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.699 [2024-11-20 16:20:25.312258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.699 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.322199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.699 [2024-11-20 16:20:25.322260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.699 [2024-11-20 16:20:25.322273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.699 [2024-11-20 16:20:25.322280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.699 [2024-11-20 16:20:25.322286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.699 [2024-11-20 16:20:25.322302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.699 qpair failed and we were unable to recover it. 
00:27:24.699 [2024-11-20 16:20:25.332214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.332271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.332286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.332293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.332300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.332315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.342232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.342286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.342303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.342310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.342316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.342331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.352212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.352266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.352280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.352286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.352292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.352308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.362227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.362282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.362296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.362303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.362309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.362324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.372317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.372366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.372380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.372387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.372393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.372408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.382308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.382369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.382382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.382389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.382398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.382414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.392375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.392441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.392455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.392462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.392468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.392483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.402406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.402461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.402475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.402481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.402487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.402502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.412442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.412494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.412508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.412515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.412521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.412536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.422465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.422518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.422531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.422537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.422543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.422558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.432501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.432556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.432570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.432577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.432583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.432598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.442526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.442583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.442597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.442604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.442610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.442624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.452556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.452606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.700 [2024-11-20 16:20:25.452620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.700 [2024-11-20 16:20:25.452626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.700 [2024-11-20 16:20:25.452632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.700 [2024-11-20 16:20:25.452648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.700 qpair failed and we were unable to recover it. 
00:27:24.700 [2024-11-20 16:20:25.462580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.700 [2024-11-20 16:20:25.462677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.462691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.462697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.462703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.462718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.701 [2024-11-20 16:20:25.472622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.701 [2024-11-20 16:20:25.472673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.472690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.472697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.472702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.472718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.701 [2024-11-20 16:20:25.482640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.701 [2024-11-20 16:20:25.482698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.482712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.482718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.482724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.482740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.701 [2024-11-20 16:20:25.492604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.701 [2024-11-20 16:20:25.492657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.492670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.492677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.492683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.492698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.701 [2024-11-20 16:20:25.502707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.701 [2024-11-20 16:20:25.502757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.502770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.502777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.502783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.502798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.701 [2024-11-20 16:20:25.512767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.701 [2024-11-20 16:20:25.512828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.512842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.512849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.512858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.512874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.701 [2024-11-20 16:20:25.522761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.701 [2024-11-20 16:20:25.522820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.701 [2024-11-20 16:20:25.522834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.701 [2024-11-20 16:20:25.522841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.701 [2024-11-20 16:20:25.522847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.701 [2024-11-20 16:20:25.522862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.701 qpair failed and we were unable to recover it. 
00:27:24.962 [2024-11-20 16:20:25.532815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.962 [2024-11-20 16:20:25.532922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.962 [2024-11-20 16:20:25.532936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.962 [2024-11-20 16:20:25.532944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.962 [2024-11-20 16:20:25.532955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.962 [2024-11-20 16:20:25.532971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.962 qpair failed and we were unable to recover it. 
00:27:24.962 [2024-11-20 16:20:25.542812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.962 [2024-11-20 16:20:25.542885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.962 [2024-11-20 16:20:25.542900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.962 [2024-11-20 16:20:25.542907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.962 [2024-11-20 16:20:25.542914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.962 [2024-11-20 16:20:25.542929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.962 qpair failed and we were unable to recover it. 
00:27:24.962 [2024-11-20 16:20:25.552860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.962 [2024-11-20 16:20:25.552916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.962 [2024-11-20 16:20:25.552930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.962 [2024-11-20 16:20:25.552937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.552943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.552962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.562914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.562978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.562992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.562999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.563005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.563020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.572925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.572983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.572997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.573004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.573010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.573025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.582905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.582992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.583007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.583013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.583019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.583035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.592964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.593020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.593034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.593040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.593046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.593062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.602919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.602982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.602999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.603006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.603012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.603028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.613044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.613129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.613143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.613150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.613156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.613172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.623056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.623109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.623123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.623130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.623137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.623152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.633073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.633130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.633144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.633151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.633157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.633172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.643139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.643236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.643249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.643259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.643265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.643280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.653128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.653184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.653198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.653205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.963 [2024-11-20 16:20:25.653211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.963 [2024-11-20 16:20:25.653226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.963 qpair failed and we were unable to recover it. 
00:27:24.963 [2024-11-20 16:20:25.663170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.963 [2024-11-20 16:20:25.663229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.963 [2024-11-20 16:20:25.663243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.963 [2024-11-20 16:20:25.663250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.663255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.663270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.673185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.673234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.673247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.673254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.673259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.673275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.683223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.683276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.683290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.683297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.683302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.683322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.693266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.693324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.693338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.693344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.693350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.693366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.703288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.703357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.703370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.703377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.703383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.703398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.713302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.713354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.713368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.713374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.713380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.713396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.723269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.723347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.723360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.723367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.723373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.723388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.733380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.733444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.733458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.733465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.733471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.733487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.743383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.743439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.743452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.743459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.743465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.743480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.753394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.753449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.753463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.753470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.964 [2024-11-20 16:20:25.753476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.964 [2024-11-20 16:20:25.753491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.964 qpair failed and we were unable to recover it. 
00:27:24.964 [2024-11-20 16:20:25.763374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.964 [2024-11-20 16:20:25.763435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.964 [2024-11-20 16:20:25.763449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.964 [2024-11-20 16:20:25.763456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.965 [2024-11-20 16:20:25.763462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.965 [2024-11-20 16:20:25.763478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.965 qpair failed and we were unable to recover it. 
00:27:24.965 [2024-11-20 16:20:25.773502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.965 [2024-11-20 16:20:25.773560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.965 [2024-11-20 16:20:25.773574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.965 [2024-11-20 16:20:25.773584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.965 [2024-11-20 16:20:25.773590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.965 [2024-11-20 16:20:25.773605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.965 qpair failed and we were unable to recover it. 
00:27:24.965 [2024-11-20 16:20:25.783446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.965 [2024-11-20 16:20:25.783509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.965 [2024-11-20 16:20:25.783523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.965 [2024-11-20 16:20:25.783529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.965 [2024-11-20 16:20:25.783535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.965 [2024-11-20 16:20:25.783551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.965 qpair failed and we were unable to recover it. 
00:27:24.965 [2024-11-20 16:20:25.793547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.965 [2024-11-20 16:20:25.793604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.965 [2024-11-20 16:20:25.793619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.965 [2024-11-20 16:20:25.793625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.965 [2024-11-20 16:20:25.793632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:24.965 [2024-11-20 16:20:25.793647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.965 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.803550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.803615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.803630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.803637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.803643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.803659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.813555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.813650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.813663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.813670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.813676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.813695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.823551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.823605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.823618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.823625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.823631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.823646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.833649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.833706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.833720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.833726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.833732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.833748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.843721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.843793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.843807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.843814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.843820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.843835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.853718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.853776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.853790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.853796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.853803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.853818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.863708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.863760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.863774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.863780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.863786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.863801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.873695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.873762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.873776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.873783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.873789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.873805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.225 qpair failed and we were unable to recover it. 
00:27:25.225 [2024-11-20 16:20:25.883783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.225 [2024-11-20 16:20:25.883842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.225 [2024-11-20 16:20:25.883856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.225 [2024-11-20 16:20:25.883862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.225 [2024-11-20 16:20:25.883868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.225 [2024-11-20 16:20:25.883884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.893837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.893887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.893901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.893907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.893913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.893929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.903840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.903892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.903909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.903915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.903921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.903937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.913808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.913873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.913888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.913894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.913900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.913916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.923849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.923904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.923918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.923924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.923930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.923946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.933939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.934023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.934037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.934044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.934050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.934066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.943921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.944021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.944035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.944042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.944053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.944069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [2024-11-20 16:20:25.953975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.226 [2024-11-20 16:20:25.954035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.226 [2024-11-20 16:20:25.954049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.226 [2024-11-20 16:20:25.954056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.226 [2024-11-20 16:20:25.954062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.226 [2024-11-20 16:20:25.954078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.226 qpair failed and we were unable to recover it. 
00:27:25.226 [... identical error sequence (Unknown controller ID 0x1 / Connect command failed, rc -5 / sct 1, sc 130 / Failed to connect tqpair=0x7fd850000b90 / CQ transport error -6 on qpair id 1 / qpair failed and we were unable to recover it) repeats at ~10 ms intervals from 16:20:25.964017 through 16:20:26.295073 ...]
00:27:25.489 [2024-11-20 16:20:26.304922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.489 [2024-11-20 16:20:26.305010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.489 [2024-11-20 16:20:26.305024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.489 [2024-11-20 16:20:26.305031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.489 [2024-11-20 16:20:26.305037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.489 [2024-11-20 16:20:26.305052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.489 qpair failed and we were unable to recover it. 
00:27:25.489 [2024-11-20 16:20:26.314945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.489 [2024-11-20 16:20:26.315007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.489 [2024-11-20 16:20:26.315021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.489 [2024-11-20 16:20:26.315028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.489 [2024-11-20 16:20:26.315033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.489 [2024-11-20 16:20:26.315049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.489 qpair failed and we were unable to recover it. 
00:27:25.748 [2024-11-20 16:20:26.325190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.748 [2024-11-20 16:20:26.325255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.748 [2024-11-20 16:20:26.325271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.748 [2024-11-20 16:20:26.325278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.748 [2024-11-20 16:20:26.325283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.748 [2024-11-20 16:20:26.325299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.748 qpair failed and we were unable to recover it. 
00:27:25.748 [2024-11-20 16:20:26.335143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.748 [2024-11-20 16:20:26.335204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.748 [2024-11-20 16:20:26.335218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.748 [2024-11-20 16:20:26.335229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.748 [2024-11-20 16:20:26.335235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.748 [2024-11-20 16:20:26.335251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.748 qpair failed and we were unable to recover it. 
00:27:25.748 [2024-11-20 16:20:26.345120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.748 [2024-11-20 16:20:26.345170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.748 [2024-11-20 16:20:26.345183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.748 [2024-11-20 16:20:26.345190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.748 [2024-11-20 16:20:26.345196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.748 [2024-11-20 16:20:26.345211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.748 qpair failed and we were unable to recover it. 
00:27:25.748 [2024-11-20 16:20:26.355121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.748 [2024-11-20 16:20:26.355174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.748 [2024-11-20 16:20:26.355188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.748 [2024-11-20 16:20:26.355195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.748 [2024-11-20 16:20:26.355200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.748 [2024-11-20 16:20:26.355216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.748 qpair failed and we were unable to recover it. 
00:27:25.748 [2024-11-20 16:20:26.365188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.748 [2024-11-20 16:20:26.365246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.365259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.365266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.365272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.365287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.375134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.375191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.375204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.375211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.375216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.375235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.385221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.385269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.385283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.385289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.385295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.385310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.395191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.395243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.395257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.395263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.395269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.395284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.405295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.405350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.405363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.405370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.405376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.405391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.415276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.415328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.415341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.415348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.415353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.415369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.425322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.425377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.425392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.425399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.425405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.425421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.435374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.435429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.435442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.435449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.435455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.435470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.445334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.445392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.445406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.445413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.445419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.445434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.455433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.455487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.455501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.455508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.455514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.455529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.465475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.465565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.465582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.465589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.465595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.465610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.475470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.475519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.475532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.475539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.475545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.749 [2024-11-20 16:20:26.475561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.749 qpair failed and we were unable to recover it. 
00:27:25.749 [2024-11-20 16:20:26.485442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.749 [2024-11-20 16:20:26.485507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.749 [2024-11-20 16:20:26.485521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.749 [2024-11-20 16:20:26.485527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.749 [2024-11-20 16:20:26.485533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.485548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.495534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.495589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.495603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.495609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.495615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.495631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.505614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.505663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.505677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.505683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.505693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.505708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.515599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.515696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.515709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.515716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.515722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.515737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.525567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.525622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.525636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.525642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.525648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.525663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.535666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.535723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.535736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.535743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.535748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.535763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.545670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.545721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.545735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.545742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.545748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.545764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.555715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.555774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.555788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.555795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.555800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.555816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.565808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.750 [2024-11-20 16:20:26.565867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.750 [2024-11-20 16:20:26.565880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.750 [2024-11-20 16:20:26.565887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.750 [2024-11-20 16:20:26.565893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:25.750 [2024-11-20 16:20:26.565908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.750 qpair failed and we were unable to recover it. 
00:27:25.750 [2024-11-20 16:20:26.575819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.750 [2024-11-20 16:20:26.575877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.750 [2024-11-20 16:20:26.575891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.750 [2024-11-20 16:20:26.575898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.750 [2024-11-20 16:20:26.575904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:25.750 [2024-11-20 16:20:26.575920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:25.750 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.585826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.585890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.585905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.585911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.585917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.585933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.595877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.595931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.595952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.595960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.595966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.595981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.605887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.605951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.605965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.605972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.605978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.605993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.615909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.615969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.615982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.615989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.615995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.616010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.625924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.625977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.625991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.625997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.626003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.626019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.635952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.636003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.636017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.636024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.636035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.636051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.646000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.646058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.646072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.646079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.646085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.646099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.656004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.656086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.656100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.656106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.656112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.656127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.666040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.666306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.666322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.666329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.666335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.666351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.676073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.676150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.010 [2024-11-20 16:20:26.676163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.010 [2024-11-20 16:20:26.676171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.010 [2024-11-20 16:20:26.676176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.010 [2024-11-20 16:20:26.676192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.010 qpair failed and we were unable to recover it.
00:27:26.010 [2024-11-20 16:20:26.686127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.010 [2024-11-20 16:20:26.686183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.686196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.686203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.686210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.686228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.696168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.696236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.696250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.696257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.696263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.696278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.706161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.706213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.706228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.706235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.706241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.706256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.716113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.716164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.716177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.716184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.716190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.716206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.726278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.726335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.726352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.726359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.726364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.726380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.736227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.736287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.736301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.736308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.736314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.736329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.746321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.746372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.746386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.746392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.746399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.746414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.756344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.756401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.756414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.756421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.756427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.756442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.766367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.766426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.766440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.766450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.766455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.766471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.776357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.776416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.776430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.776437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.776443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.776458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.786391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.786462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.786476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.786483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.786489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.786504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.796418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.796473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.796487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.796494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.796500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.796515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.806448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.806501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.011 [2024-11-20 16:20:26.806515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.011 [2024-11-20 16:20:26.806521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.011 [2024-11-20 16:20:26.806527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.011 [2024-11-20 16:20:26.806544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.011 qpair failed and we were unable to recover it.
00:27:26.011 [2024-11-20 16:20:26.816534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.011 [2024-11-20 16:20:26.816592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.012 [2024-11-20 16:20:26.816606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.012 [2024-11-20 16:20:26.816613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.012 [2024-11-20 16:20:26.816619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.012 [2024-11-20 16:20:26.816634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.012 qpair failed and we were unable to recover it.
00:27:26.012 [2024-11-20 16:20:26.826442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.012 [2024-11-20 16:20:26.826535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.012 [2024-11-20 16:20:26.826549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.012 [2024-11-20 16:20:26.826556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.012 [2024-11-20 16:20:26.826562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.012 [2024-11-20 16:20:26.826577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.012 qpair failed and we were unable to recover it.
00:27:26.012 [2024-11-20 16:20:26.836538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.012 [2024-11-20 16:20:26.836613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.012 [2024-11-20 16:20:26.836627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.012 [2024-11-20 16:20:26.836634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.012 [2024-11-20 16:20:26.836640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.012 [2024-11-20 16:20:26.836656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.012 qpair failed and we were unable to recover it.
00:27:26.272 [2024-11-20 16:20:26.846603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.272 [2024-11-20 16:20:26.846665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.272 [2024-11-20 16:20:26.846679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.272 [2024-11-20 16:20:26.846686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.272 [2024-11-20 16:20:26.846692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.272 [2024-11-20 16:20:26.846707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.272 qpair failed and we were unable to recover it.
00:27:26.272 [2024-11-20 16:20:26.856587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.272 [2024-11-20 16:20:26.856650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.272 [2024-11-20 16:20:26.856665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.272 [2024-11-20 16:20:26.856672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.272 [2024-11-20 16:20:26.856678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.272 [2024-11-20 16:20:26.856694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.272 qpair failed and we were unable to recover it.
00:27:26.272 [2024-11-20 16:20:26.866631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.273 [2024-11-20 16:20:26.866715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.273 [2024-11-20 16:20:26.866729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.273 [2024-11-20 16:20:26.866735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.273 [2024-11-20 16:20:26.866741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.273 [2024-11-20 16:20:26.866756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.273 qpair failed and we were unable to recover it.
00:27:26.273 [2024-11-20 16:20:26.876643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.273 [2024-11-20 16:20:26.876695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.273 [2024-11-20 16:20:26.876708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.273 [2024-11-20 16:20:26.876715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.273 [2024-11-20 16:20:26.876721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.273 [2024-11-20 16:20:26.876736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.273 qpair failed and we were unable to recover it.
00:27:26.273 [2024-11-20 16:20:26.886711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.273 [2024-11-20 16:20:26.886769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.273 [2024-11-20 16:20:26.886782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.273 [2024-11-20 16:20:26.886789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.273 [2024-11-20 16:20:26.886795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.273 [2024-11-20 16:20:26.886810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.273 qpair failed and we were unable to recover it.
00:27:26.273 [2024-11-20 16:20:26.896712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.273 [2024-11-20 16:20:26.896768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.273 [2024-11-20 16:20:26.896781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.273 [2024-11-20 16:20:26.896792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.273 [2024-11-20 16:20:26.896797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.273 [2024-11-20 16:20:26.896812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.273 qpair failed and we were unable to recover it.
00:27:26.273 [2024-11-20 16:20:26.906741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.273 [2024-11-20 16:20:26.906811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.273 [2024-11-20 16:20:26.906825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.273 [2024-11-20 16:20:26.906831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.273 [2024-11-20 16:20:26.906838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.273 [2024-11-20 16:20:26.906853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.273 qpair failed and we were unable to recover it.
00:27:26.273 [2024-11-20 16:20:26.916761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:26.273 [2024-11-20 16:20:26.916813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:26.273 [2024-11-20 16:20:26.916827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:26.273 [2024-11-20 16:20:26.916834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:26.273 [2024-11-20 16:20:26.916840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90
00:27:26.273 [2024-11-20 16:20:26.916855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.273 qpair failed and we were unable to recover it.
00:27:26.273 [2024-11-20 16:20:26.926851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.926907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.926921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.926928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.926934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.273 [2024-11-20 16:20:26.926953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-11-20 16:20:26.936828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.936911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.936925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.936931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.936937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.273 [2024-11-20 16:20:26.936961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-11-20 16:20:26.946780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.946868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.946883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.946890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.946896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.273 [2024-11-20 16:20:26.946911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-11-20 16:20:26.956894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.956963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.956977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.956984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.956990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.273 [2024-11-20 16:20:26.957006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-11-20 16:20:26.966901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.966986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.967000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.967007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.967012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.273 [2024-11-20 16:20:26.967028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-11-20 16:20:26.976938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.977025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.977039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.977046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.977052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.273 [2024-11-20 16:20:26.977067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-11-20 16:20:26.986953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.273 [2024-11-20 16:20:26.987011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.273 [2024-11-20 16:20:26.987025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.273 [2024-11-20 16:20:26.987032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.273 [2024-11-20 16:20:26.987038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:26.987053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:26.996983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:26.997036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:26.997049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:26.997056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:26.997062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:26.997077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.007020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.007077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.007091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.007098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.007104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.007119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.017044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.017099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.017112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.017119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.017125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.017140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.027066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.027118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.027135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.027142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.027147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.027163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.037064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.037137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.037150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.037157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.037163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.037178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.047139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.047196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.047209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.047216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.047222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.047237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.057161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.057213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.057226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.057233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.057238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.057253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.067185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.067239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.067252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.067259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.067268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.067283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.077217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.077267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.077280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.077287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.077293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.077307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.087254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.087309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.087322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.087329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.087334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.087349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-11-20 16:20:27.097267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.274 [2024-11-20 16:20:27.097325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.274 [2024-11-20 16:20:27.097338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.274 [2024-11-20 16:20:27.097345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.274 [2024-11-20 16:20:27.097351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.274 [2024-11-20 16:20:27.097366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.534 [2024-11-20 16:20:27.107345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.534 [2024-11-20 16:20:27.107403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.534 [2024-11-20 16:20:27.107418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.534 [2024-11-20 16:20:27.107425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.534 [2024-11-20 16:20:27.107431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.534 [2024-11-20 16:20:27.107447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.534 qpair failed and we were unable to recover it. 
00:27:26.534 [2024-11-20 16:20:27.117334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.534 [2024-11-20 16:20:27.117396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.534 [2024-11-20 16:20:27.117411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.534 [2024-11-20 16:20:27.117418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.534 [2024-11-20 16:20:27.117424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.534 [2024-11-20 16:20:27.117439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.534 qpair failed and we were unable to recover it. 
00:27:26.534 [2024-11-20 16:20:27.127371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.534 [2024-11-20 16:20:27.127427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.534 [2024-11-20 16:20:27.127441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.534 [2024-11-20 16:20:27.127448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.534 [2024-11-20 16:20:27.127453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.534 [2024-11-20 16:20:27.127468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.534 qpair failed and we were unable to recover it. 
00:27:26.534 [2024-11-20 16:20:27.137390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.534 [2024-11-20 16:20:27.137446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.534 [2024-11-20 16:20:27.137460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.534 [2024-11-20 16:20:27.137466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.534 [2024-11-20 16:20:27.137472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.137487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [2024-11-20 16:20:27.147388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.535 [2024-11-20 16:20:27.147447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.535 [2024-11-20 16:20:27.147461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.535 [2024-11-20 16:20:27.147468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.535 [2024-11-20 16:20:27.147473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.147489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [2024-11-20 16:20:27.157482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.535 [2024-11-20 16:20:27.157540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.535 [2024-11-20 16:20:27.157557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.535 [2024-11-20 16:20:27.157564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.535 [2024-11-20 16:20:27.157570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.157586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [2024-11-20 16:20:27.167489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.535 [2024-11-20 16:20:27.167559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.535 [2024-11-20 16:20:27.167573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.535 [2024-11-20 16:20:27.167580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.535 [2024-11-20 16:20:27.167585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.167601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [2024-11-20 16:20:27.177504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.535 [2024-11-20 16:20:27.177563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.535 [2024-11-20 16:20:27.177577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.535 [2024-11-20 16:20:27.177583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.535 [2024-11-20 16:20:27.177589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.177604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [2024-11-20 16:20:27.187530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.535 [2024-11-20 16:20:27.187587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.535 [2024-11-20 16:20:27.187601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.535 [2024-11-20 16:20:27.187608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.535 [2024-11-20 16:20:27.187614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.187629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [2024-11-20 16:20:27.197536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.535 [2024-11-20 16:20:27.197608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.535 [2024-11-20 16:20:27.197624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.535 [2024-11-20 16:20:27.197633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.535 [2024-11-20 16:20:27.197645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.535 [2024-11-20 16:20:27.197663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.535 qpair failed and we were unable to recover it. 
00:27:26.535 [... identical CONNECT failure sequence (Unknown controller ID 0x1; sct 1, sc 130; CQ transport error -6 on qpair id 1; tqpair=0x7fd850000b90) repeated 34 more times at ~10 ms intervals, 16:20:27.207688 through 16:20:27.538654 ...] 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.548534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.548624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.798 [2024-11-20 16:20:27.548637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.798 [2024-11-20 16:20:27.548643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.798 [2024-11-20 16:20:27.548649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.798 [2024-11-20 16:20:27.548664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.558583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.558660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.798 [2024-11-20 16:20:27.558673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.798 [2024-11-20 16:20:27.558680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.798 [2024-11-20 16:20:27.558686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.798 [2024-11-20 16:20:27.558701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.568641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.568726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.798 [2024-11-20 16:20:27.568740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.798 [2024-11-20 16:20:27.568746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.798 [2024-11-20 16:20:27.568752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.798 [2024-11-20 16:20:27.568768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.578690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.578744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.798 [2024-11-20 16:20:27.578758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.798 [2024-11-20 16:20:27.578765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.798 [2024-11-20 16:20:27.578771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.798 [2024-11-20 16:20:27.578787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.588614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.588673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.798 [2024-11-20 16:20:27.588690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.798 [2024-11-20 16:20:27.588697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.798 [2024-11-20 16:20:27.588703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.798 [2024-11-20 16:20:27.588718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.598704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.598757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.798 [2024-11-20 16:20:27.598770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.798 [2024-11-20 16:20:27.598777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.798 [2024-11-20 16:20:27.598783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.798 [2024-11-20 16:20:27.598798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.798 qpair failed and we were unable to recover it. 
00:27:26.798 [2024-11-20 16:20:27.608678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.798 [2024-11-20 16:20:27.608736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.799 [2024-11-20 16:20:27.608750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.799 [2024-11-20 16:20:27.608756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.799 [2024-11-20 16:20:27.608762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.799 [2024-11-20 16:20:27.608777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.799 qpair failed and we were unable to recover it. 
00:27:26.799 [2024-11-20 16:20:27.618806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.799 [2024-11-20 16:20:27.618896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.799 [2024-11-20 16:20:27.618910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.799 [2024-11-20 16:20:27.618916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.799 [2024-11-20 16:20:27.618922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.799 [2024-11-20 16:20:27.618937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.799 qpair failed and we were unable to recover it. 
00:27:26.799 [2024-11-20 16:20:27.628743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.799 [2024-11-20 16:20:27.628802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.799 [2024-11-20 16:20:27.628817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.799 [2024-11-20 16:20:27.628824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.799 [2024-11-20 16:20:27.628834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:26.799 [2024-11-20 16:20:27.628850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.799 qpair failed and we were unable to recover it. 
00:27:27.058 [2024-11-20 16:20:27.638856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.638912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.638927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.638934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.638940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.638960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.648881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.648941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.648964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.648971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.648977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.648992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.658900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.658978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.658992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.658999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.659004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.659020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.668923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.668977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.668991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.668997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.669003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.669018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.678956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.679011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.679025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.679032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.679038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.679053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.688996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.689052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.689066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.689073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.689079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.689094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.699010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.699063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.699077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.699083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.699089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.699104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.709040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.709093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.709106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.709113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.709119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.709134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.719002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.719054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.719071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.719078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.059 [2024-11-20 16:20:27.719084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.059 [2024-11-20 16:20:27.719100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.059 qpair failed and we were unable to recover it. 
00:27:27.059 [2024-11-20 16:20:27.729145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.059 [2024-11-20 16:20:27.729202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.059 [2024-11-20 16:20:27.729215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.059 [2024-11-20 16:20:27.729222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.729228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.729243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.739175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.739241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.739255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.739262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.739267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.739283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.749101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.749157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.749171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.749178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.749184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.749199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.759176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.759233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.759247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.759253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.759262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.759279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.769216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.769272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.769286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.769293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.769299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.769314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.779240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.779296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.779310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.779317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.779323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.779338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.789275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.789365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.789379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.789386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.789392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.789407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.799293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.799347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.799361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.799368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.799374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.799390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.809254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.809322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.809336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.809342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.809348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.809363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [2024-11-20 16:20:27.819357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.060 [2024-11-20 16:20:27.819406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.060 [2024-11-20 16:20:27.819420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.060 [2024-11-20 16:20:27.819426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.060 [2024-11-20 16:20:27.819432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.060 [2024-11-20 16:20:27.819447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.060 qpair failed and we were unable to recover it. 
00:27:27.060 [... identical CONNECT failure sequence (ctrlr.c:762 "Unknown controller ID 0x1" → nvme_fabric.c:599 "Connect command failed, rc -5" → nvme_fabric.c:610 "sct 1, sc 130" → nvme_tcp.c:2348/2125 poll/connect failures on tqpair=0x7fd850000b90 → nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 1" → "qpair failed and we were unable to recover it") repeated 34 more times at ~10 ms intervals, 2024-11-20 16:20:27.829 through 16:20:28.160 (log time 00:27:27.060–00:27:27.583) ...]
00:27:27.583 [2024-11-20 16:20:28.170393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.583 [2024-11-20 16:20:28.170456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.583 [2024-11-20 16:20:28.170470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.583 [2024-11-20 16:20:28.170477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.583 [2024-11-20 16:20:28.170483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.583 [2024-11-20 16:20:28.170499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.583 qpair failed and we were unable to recover it. 
00:27:27.583 [2024-11-20 16:20:28.180400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.583 [2024-11-20 16:20:28.180458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.583 [2024-11-20 16:20:28.180471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.583 [2024-11-20 16:20:28.180478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.583 [2024-11-20 16:20:28.180484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.180499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.190419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.190471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.190485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.190491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.190501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.190516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.200487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.200547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.200560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.200567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.200573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.200588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.210486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.210541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.210555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.210562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.210568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.210583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.220495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.220552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.220566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.220572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.220578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.220593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.230527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.230583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.230596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.230602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.230608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.230623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.240556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.240607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.240621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.240628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.240634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.240650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.250600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.250659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.250673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.250680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.250686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.250701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.260624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.260678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.260692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.260699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.260704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.260720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.270649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.270704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.270718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.270724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.270730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.270746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.280689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.280744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.280760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.280767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.280773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.280788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.290745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.290845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.290859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.290865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.290872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.290887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.300739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.300796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.300810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.300817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.300824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.584 [2024-11-20 16:20:28.300839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.584 qpair failed and we were unable to recover it. 
00:27:27.584 [2024-11-20 16:20:28.310693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.584 [2024-11-20 16:20:28.310742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.584 [2024-11-20 16:20:28.310756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.584 [2024-11-20 16:20:28.310762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.584 [2024-11-20 16:20:28.310768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.310784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.320783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.320839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.320853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.320863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.320869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.320885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.330814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.330872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.330885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.330891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.330898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.330912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.340845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.340899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.340913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.340919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.340925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.340940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.350885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.350952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.350967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.350974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.350982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.350998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.360883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.360938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.360957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.360964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.360970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.360986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.370932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.371014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.371028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.371035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.371041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.371056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.380956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.381008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.381021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.381028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.381033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.381049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.391001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.391063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.391077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.391083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.391089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.391105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.401000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.401056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.401069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.401076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.401082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.401097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.585 [2024-11-20 16:20:28.411044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.585 [2024-11-20 16:20:28.411100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.585 [2024-11-20 16:20:28.411114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.585 [2024-11-20 16:20:28.411120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.585 [2024-11-20 16:20:28.411126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.585 [2024-11-20 16:20:28.411142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.585 qpair failed and we were unable to recover it. 
00:27:27.845 [2024-11-20 16:20:28.421053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.845 [2024-11-20 16:20:28.421112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.845 [2024-11-20 16:20:28.421126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.845 [2024-11-20 16:20:28.421133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.845 [2024-11-20 16:20:28.421139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.845 [2024-11-20 16:20:28.421155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.845 qpair failed and we were unable to recover it. 
00:27:27.845 [2024-11-20 16:20:28.431115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.845 [2024-11-20 16:20:28.431173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.845 [2024-11-20 16:20:28.431187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.845 [2024-11-20 16:20:28.431194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.845 [2024-11-20 16:20:28.431200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.845 [2024-11-20 16:20:28.431216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.845 qpair failed and we were unable to recover it. 
00:27:27.845 [2024-11-20 16:20:28.441103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.845 [2024-11-20 16:20:28.441158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.441173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.441179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.441185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.441201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.451170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.451227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.451241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.451251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.451257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.451272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.461203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.461273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.461286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.461293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.461299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.461314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.471202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.471256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.471270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.471277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.471283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.471299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.481158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.481216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.481229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.481236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.481242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.481257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.491265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.491320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.491334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.491341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.491347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.491365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.501291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.501349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.501363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.501370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.501376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.501391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.511331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.511381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.511395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.511401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.511407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.511423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.521346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.521398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.521411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.521417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.521424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.521439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.531325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.531379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.531392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.531399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.531404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.531419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.541414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.541472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.541485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.541492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.541498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.541512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.551436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.551511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.551525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.551531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.551537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.551552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.561408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.561464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.846 [2024-11-20 16:20:28.561477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.846 [2024-11-20 16:20:28.561484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.846 [2024-11-20 16:20:28.561490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.846 [2024-11-20 16:20:28.561506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.846 qpair failed and we were unable to recover it. 
00:27:27.846 [2024-11-20 16:20:28.571498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.846 [2024-11-20 16:20:28.571559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.571573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.571579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.571585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.571600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.581521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.581584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.581600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.581607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.581612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.581629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.591544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.591598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.591612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.591618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.591624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.591640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.601571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.601623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.601637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.601643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.601649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.601665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.611584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.611662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.611677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.611683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.611691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.611707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.621559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.621615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.621628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.621635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.621641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.621660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.631630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.631696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.631709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.631715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.631721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.631737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.641612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.641671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.641685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.641692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.641697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.641713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.651745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.651821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.651834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.651841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.651847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.651862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.661743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.661795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.661808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.661815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.661821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.661836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:27.847 [2024-11-20 16:20:28.671806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.847 [2024-11-20 16:20:28.671888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.847 [2024-11-20 16:20:28.671902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.847 [2024-11-20 16:20:28.671909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.847 [2024-11-20 16:20:28.671914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:27.847 [2024-11-20 16:20:28.671930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:27.847 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.681822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.681884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.681899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.681906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.681912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.681927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.691859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.691919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.691934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.691941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.691951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.691967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.701866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.701959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.701974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.701981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.701987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.702002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.711891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.711945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.711967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.711974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.711980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.711996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.721883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.721944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.721962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.721969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.721975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.721991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.731902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.731965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.731979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.731986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.731992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.732007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.741987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.742061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.742075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.742081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.742087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.107 [2024-11-20 16:20:28.742102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.107 qpair failed and we were unable to recover it. 
00:27:28.107 [2024-11-20 16:20:28.751925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.107 [2024-11-20 16:20:28.751981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.107 [2024-11-20 16:20:28.751995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.107 [2024-11-20 16:20:28.752003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.107 [2024-11-20 16:20:28.752012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.108 [2024-11-20 16:20:28.752027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.108 qpair failed and we were unable to recover it. 
00:27:28.108 (repeated output elided: the same CONNECT failure sequence, Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, Failed to connect tqpair=0x7fd850000b90, CQ transport error -6 (No such device or address) on qpair id 1, recurred 34 more times at ~10 ms intervals from [2024-11-20 16:20:28.762041] through [2024-11-20 16:20:29.093111], each attempt ending: qpair failed and we were unable to recover it.)
00:27:28.370 [2024-11-20 16:20:29.103013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.103068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.103082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.103089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.103095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.103110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.113043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.113098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.113112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.113119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.113125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.113140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.123066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.123122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.123136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.123142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.123148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.123163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.133105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.133163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.133176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.133183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.133189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.133203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.143151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.143208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.143225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.143232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.143238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.143253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.153078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.153162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.153177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.153184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.153191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.153206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.163190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.163263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.163276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.163283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.163289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.163304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.173202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.173255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.173269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.173275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.173281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.173296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.183245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.183305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.183318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.183325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.183334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.183350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.370 [2024-11-20 16:20:29.193266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.370 [2024-11-20 16:20:29.193333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.370 [2024-11-20 16:20:29.193347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.370 [2024-11-20 16:20:29.193353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.370 [2024-11-20 16:20:29.193359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.370 [2024-11-20 16:20:29.193374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.370 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.203310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.203368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.203384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.203391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.203397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.203413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.213367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.213429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.213443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.213450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.213456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.213472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.223354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.223412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.223426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.223433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.223439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.223455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.233370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.233427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.233441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.233448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.233454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.233469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.243412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.243465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.243479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.243485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.243492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.243507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.253424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.253480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.253494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.253501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.253507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.253522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.263479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.263532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.263546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.263552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.263558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.263574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.273492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.273547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.273564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.273571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.273577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd850000b90 00:27:28.629 [2024-11-20 16:20:29.273592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.283526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.283622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.283680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.283706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.283728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd848000b90 00:27:28.629 [2024-11-20 16:20:29.283779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.293550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.293624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.293651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.293665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.293678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd848000b90 00:27:28.629 [2024-11-20 16:20:29.293708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.303615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.303722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.303778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.629 [2024-11-20 16:20:29.303803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.629 [2024-11-20 16:20:29.303825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd844000b90 00:27:28.629 [2024-11-20 16:20:29.303876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.629 qpair failed and we were unable to recover it. 
00:27:28.629 [2024-11-20 16:20:29.313594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.629 [2024-11-20 16:20:29.313672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.629 [2024-11-20 16:20:29.313701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.630 [2024-11-20 16:20:29.313715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.630 [2024-11-20 16:20:29.313735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd844000b90 00:27:28.630 [2024-11-20 16:20:29.313766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.630 qpair failed and we were unable to recover it. 
00:27:28.630 [2024-11-20 16:20:29.323842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.630 [2024-11-20 16:20:29.323972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.630 [2024-11-20 16:20:29.324034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.630 [2024-11-20 16:20:29.324061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.630 [2024-11-20 16:20:29.324084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7cba0 00:27:28.630 [2024-11-20 16:20:29.324133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.630 qpair failed and we were unable to recover it. 
00:27:28.630 [2024-11-20 16:20:29.333669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.630 [2024-11-20 16:20:29.333740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.630 [2024-11-20 16:20:29.333769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.630 [2024-11-20 16:20:29.333783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.630 [2024-11-20 16:20:29.333796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b7cba0 00:27:28.630 [2024-11-20 16:20:29.333826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.630 qpair failed and we were unable to recover it. 00:27:28.630 [2024-11-20 16:20:29.333942] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:28.630 A controller has encountered a failure and is being reset. 00:27:28.630 [2024-11-20 16:20:29.334055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8aaf0 (9): Bad file descriptor 00:27:28.630 Controller properly reset. 
00:27:28.630 Initializing NVMe Controllers 00:27:28.630 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:28.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:28.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:28.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:28.630 Initialization complete. Launching workers. 00:27:28.630 Starting thread on core 1 00:27:28.630 Starting thread on core 2 00:27:28.630 Starting thread on core 3 00:27:28.630 Starting thread on core 0 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:28.630 00:27:28.630 real 0m10.666s 00:27:28.630 user 0m19.365s 00:27:28.630 sys 0m4.837s 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.630 ************************************ 00:27:28.630 END TEST nvmf_target_disconnect_tc2 00:27:28.630 ************************************ 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.630 16:20:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.630 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.630 rmmod nvme_tcp 00:27:28.630 rmmod nvme_fabrics 00:27:28.630 rmmod nvme_keyring 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2895819 ']' 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2895819 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2895819 ']' 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2895819 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2895819 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2895819' 00:27:28.888 killing process with pid 2895819 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2895819 00:27:28.888 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2895819 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.147 16:20:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.053 16:20:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.053 00:27:31.053 real 0m19.462s 00:27:31.053 user 0m46.620s 00:27:31.053 
sys 0m9.739s 00:27:31.053 16:20:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.053 16:20:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:31.053 ************************************ 00:27:31.053 END TEST nvmf_target_disconnect 00:27:31.053 ************************************ 00:27:31.053 16:20:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:31.053 00:27:31.053 real 5m51.058s 00:27:31.053 user 10m30.964s 00:27:31.053 sys 1m58.020s 00:27:31.053 16:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.053 16:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.053 ************************************ 00:27:31.053 END TEST nvmf_host 00:27:31.053 ************************************ 00:27:31.053 16:20:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:31.053 16:20:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:31.053 16:20:31 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:31.053 16:20:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:31.053 16:20:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.053 16:20:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.312 ************************************ 00:27:31.312 START TEST nvmf_target_core_interrupt_mode 00:27:31.312 ************************************ 00:27:31.312 16:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:31.312 * Looking for test storage... 
00:27:31.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:31.312 16:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:31.312 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.313 --rc 
genhtml_branch_coverage=1 00:27:31.313 --rc genhtml_function_coverage=1 00:27:31.313 --rc genhtml_legend=1 00:27:31.313 --rc geninfo_all_blocks=1 00:27:31.313 --rc geninfo_unexecuted_blocks=1 00:27:31.313 00:27:31.313 ' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.313 --rc genhtml_branch_coverage=1 00:27:31.313 --rc genhtml_function_coverage=1 00:27:31.313 --rc genhtml_legend=1 00:27:31.313 --rc geninfo_all_blocks=1 00:27:31.313 --rc geninfo_unexecuted_blocks=1 00:27:31.313 00:27:31.313 ' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.313 --rc genhtml_branch_coverage=1 00:27:31.313 --rc genhtml_function_coverage=1 00:27:31.313 --rc genhtml_legend=1 00:27:31.313 --rc geninfo_all_blocks=1 00:27:31.313 --rc geninfo_unexecuted_blocks=1 00:27:31.313 00:27:31.313 ' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.313 --rc genhtml_branch_coverage=1 00:27:31.313 --rc genhtml_function_coverage=1 00:27:31.313 --rc genhtml_legend=1 00:27:31.313 --rc geninfo_all_blocks=1 00:27:31.313 --rc geninfo_unexecuted_blocks=1 00:27:31.313 00:27:31.313 ' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.313 
16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.313 16:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:31.313 
16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.313 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:31.574 ************************************ 00:27:31.574 START TEST nvmf_abort 00:27:31.574 ************************************ 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:31.574 * Looking for test storage... 
00:27:31.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:31.574 16:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.574 --rc genhtml_branch_coverage=1 00:27:31.574 --rc genhtml_function_coverage=1 00:27:31.574 --rc genhtml_legend=1 00:27:31.574 --rc geninfo_all_blocks=1 00:27:31.574 --rc geninfo_unexecuted_blocks=1 00:27:31.574 00:27:31.574 ' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.574 --rc genhtml_branch_coverage=1 00:27:31.574 --rc genhtml_function_coverage=1 00:27:31.574 --rc genhtml_legend=1 00:27:31.574 --rc geninfo_all_blocks=1 00:27:31.574 --rc geninfo_unexecuted_blocks=1 00:27:31.574 00:27:31.574 ' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.574 --rc genhtml_branch_coverage=1 00:27:31.574 --rc genhtml_function_coverage=1 00:27:31.574 --rc genhtml_legend=1 00:27:31.574 --rc geninfo_all_blocks=1 00:27:31.574 --rc geninfo_unexecuted_blocks=1 00:27:31.574 00:27:31.574 ' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.574 --rc genhtml_branch_coverage=1 00:27:31.574 --rc genhtml_function_coverage=1 00:27:31.574 --rc genhtml_legend=1 00:27:31.574 --rc geninfo_all_blocks=1 00:27:31.574 --rc geninfo_unexecuted_blocks=1 00:27:31.574 00:27:31.574 ' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.574 16:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.574 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.575 16:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.575 16:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.144 16:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.144 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.145 
16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.145 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:38.145 Found net devices under 0000:86:00.1: cvl_0_1 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.145 16:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.145 16:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:27:38.145 00:27:38.145 --- 10.0.0.2 ping statistics --- 00:27:38.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.145 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:27:38.145 00:27:38.145 --- 10.0.0.1 ping statistics --- 00:27:38.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.145 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2900435 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2900435 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2900435 ']' 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.145 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.145 [2024-11-20 16:20:38.296840] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:38.145 [2024-11-20 16:20:38.297840] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:27:38.145 [2024-11-20 16:20:38.297880] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.145 [2024-11-20 16:20:38.378277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:38.145 [2024-11-20 16:20:38.421147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.145 [2024-11-20 16:20:38.421183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.145 [2024-11-20 16:20:38.421189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.145 [2024-11-20 16:20:38.421195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.145 [2024-11-20 16:20:38.421200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.145 [2024-11-20 16:20:38.422604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.145 [2024-11-20 16:20:38.422712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.145 [2024-11-20 16:20:38.422712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.145 [2024-11-20 16:20:38.491786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:38.145 [2024-11-20 16:20:38.492551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:38.145 [2024-11-20 16:20:38.492801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:38.146 [2024-11-20 16:20:38.492960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 [2024-11-20 16:20:38.559530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:38.146 Malloc0 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 Delay0 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 [2024-11-20 16:20:38.647424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.146 16:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:38.146 [2024-11-20 16:20:38.778691] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:40.685 Initializing NVMe Controllers 00:27:40.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:40.685 controller IO queue size 128 less than required 00:27:40.685 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:40.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:40.685 Initialization complete. Launching workers. 
00:27:40.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36877 00:27:40.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36934, failed to submit 66 00:27:40.685 success 36877, unsuccessful 57, failed 0 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:40.685 16:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:40.685 rmmod nvme_tcp 00:27:40.685 rmmod nvme_fabrics 00:27:40.685 rmmod nvme_keyring 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:40.685 16:20:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2900435 ']' 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2900435 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2900435 ']' 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2900435 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900435 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900435' 00:27:40.685 killing process with pid 2900435 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2900435 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2900435 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.685 16:20:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.685 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.686 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.686 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.686 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.686 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.686 16:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:42.591 00:27:42.591 real 0m11.197s 00:27:42.591 user 0m10.884s 00:27:42.591 sys 0m5.602s 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:42.591 ************************************ 00:27:42.591 END TEST nvmf_abort 00:27:42.591 ************************************ 00:27:42.591 16:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:42.591 ************************************ 00:27:42.591 START TEST nvmf_ns_hotplug_stress 00:27:42.591 ************************************ 00:27:42.591 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:42.852 * Looking for test storage... 
00:27:42.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.852 16:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.852 16:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.852 --rc genhtml_branch_coverage=1 00:27:42.852 --rc genhtml_function_coverage=1 00:27:42.852 --rc genhtml_legend=1 00:27:42.852 --rc geninfo_all_blocks=1 00:27:42.852 --rc geninfo_unexecuted_blocks=1 00:27:42.852 00:27:42.852 ' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.852 --rc genhtml_branch_coverage=1 00:27:42.852 --rc genhtml_function_coverage=1 00:27:42.852 --rc genhtml_legend=1 00:27:42.852 --rc geninfo_all_blocks=1 00:27:42.852 --rc geninfo_unexecuted_blocks=1 00:27:42.852 00:27:42.852 ' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.852 --rc genhtml_branch_coverage=1 00:27:42.852 --rc genhtml_function_coverage=1 00:27:42.852 --rc genhtml_legend=1 00:27:42.852 --rc geninfo_all_blocks=1 00:27:42.852 --rc geninfo_unexecuted_blocks=1 00:27:42.852 00:27:42.852 ' 00:27:42.852 16:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.852 --rc genhtml_branch_coverage=1 00:27:42.852 --rc genhtml_function_coverage=1 00:27:42.852 --rc genhtml_legend=1 00:27:42.852 --rc geninfo_all_blocks=1 00:27:42.852 --rc geninfo_unexecuted_blocks=1 00:27:42.852 00:27:42.852 ' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.852 16:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.852 
16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:42.852 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.853 16:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.423 
16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.423 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.424 16:20:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:49.424 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.424 16:20:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:49.424 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.424 
16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:49.424 Found net devices under 0000:86:00.0: cvl_0_0 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:49.424 Found net devices under 0000:86:00.1: cvl_0_1 00:27:49.424 
16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.424 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:27:49.424 00:27:49.424 --- 10.0.0.2 ping statistics --- 00:27:49.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.425 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:49.425 00:27:49.425 --- 10.0.0.1 ping statistics --- 00:27:49.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.425 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.425 16:20:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2904434 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2904434 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2904434 ']' 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:49.425 [2024-11-20 16:20:49.562855] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:49.425 [2024-11-20 16:20:49.563742] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:27:49.425 [2024-11-20 16:20:49.563774] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.425 [2024-11-20 16:20:49.642748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:49.425 [2024-11-20 16:20:49.684509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.425 [2024-11-20 16:20:49.684544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.425 [2024-11-20 16:20:49.684551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.425 [2024-11-20 16:20:49.684557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.425 [2024-11-20 16:20:49.684562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:49.425 [2024-11-20 16:20:49.685851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.425 [2024-11-20 16:20:49.685975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.425 [2024-11-20 16:20:49.685977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.425 [2024-11-20 16:20:49.753435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:49.425 [2024-11-20 16:20:49.754212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:49.425 [2024-11-20 16:20:49.754559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:49.425 [2024-11-20 16:20:49.754666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:49.425 16:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:49.425 [2024-11-20 16:20:50.002750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.425 16:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:49.425 16:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.684 [2024-11-20 16:20:50.387135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.684 16:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:49.943 16:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:50.201 Malloc0 00:27:50.201 16:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:50.201 Delay0 00:27:50.201 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.459 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:50.717 NULL1 00:27:50.717 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:50.974 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2904700 00:27:50.974 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:50.974 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:50.974 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.974 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.232 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:51.232 16:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:51.489 true 00:27:51.489 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:51.490 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.747 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.006 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:52.006 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:52.006 true 00:27:52.006 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:52.006 16:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.378 Read completed with error (sct=0, sc=11) 00:27:53.378 16:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.378 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:53.378 16:20:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:53.378 true 00:27:53.636 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:53.636 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.636 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.933 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:53.933 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:54.225 true 00:27:54.225 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:54.225 16:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.170 16:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.170 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:27:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.428 16:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:55.428 16:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:55.686 true 00:27:55.686 16:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:55.686 16:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.508 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.508 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:56.508 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:56.766 true 00:27:56.766 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:56.766 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.024 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.281 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:57.281 16:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:57.281 true 00:27:57.281 16:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:57.281 16:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.650 16:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.650 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.907 16:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:58.907 16:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:58.907 true 00:27:58.907 16:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:27:58.907 16:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.838 16:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.095 16:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:00.095 16:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:00.352 true 00:28:00.352 16:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:00.352 16:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.352 16:21:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.609 16:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:00.609 16:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:00.866 true 00:28:00.866 16:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:00.866 16:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.796 16:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.053 16:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:02.053 16:21:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:02.311 true 00:28:02.311 16:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:02.311 16:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.243 16:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.501 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:03.502 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:03.502 true 00:28:03.502 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:03.502 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.758 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.015 16:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:04.015 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:04.273 true 00:28:04.273 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:04.273 16:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.206 16:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.464 16:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:05.464 16:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:05.722 true 00:28:05.722 16:21:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:05.722 16:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.653 16:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.653 16:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:06.653 16:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:06.911 true 00:28:06.911 16:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:06.911 16:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.169 16:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.427 16:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:07.427 16:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:07.427 true 
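The iteration that repeats throughout this trace (sh@44 `kill -0` liveness check on the perf PID, sh@45 remove namespace 1, sh@46 re-add Delay0, sh@49 bump null_size, sh@50 resize NULL1) can be sketched as a dry-run loop; `echo` stands in for live rpc.py calls and the current shell's PID stands in for the spdk_nvme_perf PID, so this is illustrative only:

```shell
#!/bin/sh
# Dry-run sketch of the ns_hotplug_stress loop body (sh@44-sh@50).
# kill -0 sends no signal; it only tests that the process still exists.
RPC="scripts/rpc.py"                 # full Jenkins path shortened
NQN="nqn.2016-06.io.spdk:cnode1"
PERF_PID=$$                          # stand-in for the spdk_nvme_perf PID
null_size=1000
while [ "$null_size" -lt 1003 ]; do
  kill -0 "$PERF_PID" || break                   # sh@44: perf still alive?
  echo "$RPC nvmf_subsystem_remove_ns $NQN 1"    # sh@45
  echo "$RPC nvmf_subsystem_add_ns $NQN Delay0"  # sh@46
  null_size=$((null_size + 1))                   # sh@49
  echo "$RPC bdev_null_resize NULL1 $null_size"  # sh@50
done
```

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines interleaved in the trace are the expected side effect: perf reads fail while namespace 1 is detached, and the loop exits early only if perf itself dies, which would fail the test.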
00:28:07.427 16:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:07.427 16:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.800 16:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.800 16:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:08.800 16:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:09.058 true 00:28:09.058 16:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:09.058 16:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.989 
16:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.989 16:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:09.989 16:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:10.247 true 00:28:10.247 16:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:10.247 16:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.505 16:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.763 16:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:10.763 16:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:10.763 true 00:28:10.763 16:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:10.763 16:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:28:12.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.136 16:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.136 16:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:12.136 16:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:12.136 true 00:28:12.136 16:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:12.136 16:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.394 16:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.651 16:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:12.651 16:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:12.909 true 00:28:12.909 16:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:12.909 16:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.102 16:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.102 16:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:14.102 16:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:14.359 true 00:28:14.359 16:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:14.359 16:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.292 16:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.292 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:28:15.550 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:15.550 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:15.550 true 00:28:15.551 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:15.551 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.809 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.066 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:16.066 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:16.324 true 00:28:16.324 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:16.324 16:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.257 16:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.514 16:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:17.514 16:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:17.772 true 00:28:17.772 16:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:17.772 16:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.705 16:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.705 16:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:18.705 16:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:28:18.963 true 00:28:18.963 16:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:18.963 16:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.221 16:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.478 16:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:19.478 16:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:19.478 true 00:28:19.478 16:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:19.478 16:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.851 16:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.851 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:28:20.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.108 16:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:21.108 16:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:21.108 true 00:28:21.108 16:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:21.108 16:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.042 16:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.042 Initializing NVMe Controllers 00:28:22.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.042 Controller IO queue size 128, less than required. 00:28:22.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:22.042 Controller IO queue size 128, less than required. 00:28:22.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:22.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:22.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:22.042 Initialization complete. Launching workers. 00:28:22.042 ======================================================== 00:28:22.042 Latency(us) 00:28:22.042 Device Information : IOPS MiB/s Average min max 00:28:22.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2088.13 1.02 42311.16 2555.81 1013155.71 00:28:22.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17168.06 8.38 7436.82 1586.42 381186.65 00:28:22.042 ======================================================== 00:28:22.042 Total : 19256.19 9.40 11218.57 1586.42 1013155.71 00:28:22.042 00:28:22.300 16:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:22.300 16:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:22.300 true 00:28:22.300 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2904700 00:28:22.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2904700) - No such process 00:28:22.300 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2904700 00:28:22.300 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.557 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.815 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:22.815 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:22.815 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:22.815 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:22.815 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:23.074 null0 00:28:23.074 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:23.074 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:23.074 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:23.074 null1 00:28:23.074 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:23.074 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:23.074 16:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:23.332 null2 00:28:23.332 16:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:23.332 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:23.332 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:23.590 null3 00:28:23.590 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:23.590 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:23.590 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:23.590 null4 00:28:23.849 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:23.849 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:23.849 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:23.849 null5 00:28:23.849 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:23.849 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:23.849 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:24.107 null6 00:28:24.107 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:24.107 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:24.107 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:24.366 null7 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:24.366 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2910635 2910636 2910637 2910639 2910640 2910643 2910645 2910648 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.367 16:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.367 16:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:24.367 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:24.367 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:24.367 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.367 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:24.367 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:24.367 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:24.625 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.626 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1
00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:24.884 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.142 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.143 16:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:25.401 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:25.401 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
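The xtrace entries above record ns_hotplug_stress.sh lines 16-18: a ten-iteration loop that hot-adds namespaces 1-8 (each backed by a null bdev, null0-null7) to subsystem nqn.2016-06.io.spdk:cnode1 via rpc.py, then hot-removes them all. A minimal runnable sketch of that pattern follows; it is an assumption reconstructed from the trace, not the actual SPDK script, and the `rpc` stub and `hotplug_stress_loop` name are invented so the sketch runs without a live SPDK target.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the loop traced above (not SPDK's real
# test/nvmf/target/ns_hotplug_stress.sh).
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1

# Stand-in for scripts/rpc.py so the sketch is self-contained: it only
# echoes the RPC it would issue.
rpc() { printf '%s\n' "rpc.py $*"; }

hotplug_stress_loop() {
    local i n
    for (( i = 0; i < 10; ++i )); do
        # Attach namespaces 1-8, each backed by a null bdev (null0..null7).
        # The real test issues these concurrently, which is why the trace
        # shows them completing in shuffled order.
        for n in 1 2 3 4 5 6 7 8; do
            rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$(( n - 1 ))"
        done
        # Detach all eight again before the next iteration.
        for n in 1 2 3 4 5 6 7 8; do
            rpc nvmf_subsystem_remove_ns "$NQN" "$n"
        done
    done
}

hotplug_stress_loop
```

Each iteration emits 16 RPCs (8 adds, 8 removes), so the full loop prints 160 lines, matching the add/remove cadence visible in the trace.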
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.402 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:25.660 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:25.920 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:26.178 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:26.178 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:26.178 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:26.179 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:26.179 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:26.179 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:26.179 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:26.179 16:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:26.437 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.695 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:26.696 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:26.954 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.213 16:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.471 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:27.729 16:21:28
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:27.729 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:27.730 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:27.730 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:27.730 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:27.730 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:27.730 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.730 16:21:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:27.730 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:27.988 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.989 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:28.247 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:28.247 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:28.247 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:28.247 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:28.247 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:28.248 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:28.248 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.248 16:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:28.248 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.248 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.506 rmmod nvme_tcp 00:28:28.506 rmmod nvme_fabrics 00:28:28.506 rmmod nvme_keyring 00:28:28.506 16:21:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2904434 ']' 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2904434 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2904434 ']' 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2904434 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904434 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904434' 00:28:28.506 killing process with pid 2904434 00:28:28.506 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2904434 00:28:28.506 
16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2904434 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.765 16:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.301 00:28:31.301 real 0m48.136s 00:28:31.301 user 3m2.554s 00:28:31.301 sys 
0m20.926s 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:31.301 ************************************ 00:28:31.301 END TEST nvmf_ns_hotplug_stress 00:28:31.301 ************************************ 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:31.301 ************************************ 00:28:31.301 START TEST nvmf_delete_subsystem 00:28:31.301 ************************************ 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:31.301 * Looking for test storage... 
00:28:31.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.301 16:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.301 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.302 16:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.302 --rc genhtml_branch_coverage=1 00:28:31.302 --rc genhtml_function_coverage=1 00:28:31.302 --rc genhtml_legend=1 00:28:31.302 --rc geninfo_all_blocks=1 00:28:31.302 --rc geninfo_unexecuted_blocks=1 00:28:31.302 00:28:31.302 ' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.302 --rc genhtml_branch_coverage=1 00:28:31.302 --rc genhtml_function_coverage=1 00:28:31.302 --rc genhtml_legend=1 00:28:31.302 --rc geninfo_all_blocks=1 00:28:31.302 --rc geninfo_unexecuted_blocks=1 00:28:31.302 00:28:31.302 ' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.302 --rc genhtml_branch_coverage=1 00:28:31.302 --rc genhtml_function_coverage=1 00:28:31.302 --rc genhtml_legend=1 00:28:31.302 --rc geninfo_all_blocks=1 00:28:31.302 --rc geninfo_unexecuted_blocks=1 00:28:31.302 00:28:31.302 ' 00:28:31.302 16:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.302 --rc genhtml_branch_coverage=1 00:28:31.302 --rc genhtml_function_coverage=1 00:28:31.302 --rc genhtml_legend=1 00:28:31.302 --rc geninfo_all_blocks=1 00:28:31.302 --rc geninfo_unexecuted_blocks=1 00:28:31.302 00:28:31.302 ' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.302 16:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.302 
16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.302 16:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.302 16:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:37.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:28:37.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.874 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.875 16:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:37.875 Found net devices under 0000:86:00.0: cvl_0_0 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:37.875 Found net devices under 0000:86:00.1: cvl_0_1 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:37.875 16:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:37.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:28:37.875 00:28:37.875 --- 10.0.0.2 ping statistics --- 00:28:37.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.875 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:37.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:28:37.875 00:28:37.875 --- 10.0.0.1 ping statistics --- 00:28:37.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.875 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
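The trace above (nvmf/common.sh, `nvmf_tcp_init`) shows how the harness isolates two ports of the same physical NIC: the target-side port is moved into a private network namespace so the initiator in the root namespace has to go over the wire. A minimal sketch of that sequence, using the interface names and addresses from this log (`cvl_0_0`, `cvl_0_1`, 10.0.0.0/24 — substitute your own); the `run`/`DRY_RUN` wrapper is an addition of this sketch so the commands can be inspected without root:

```shell
# run: execute the command, or just print it when DRY_RUN=1
# (the real sequence needs root and two physical ports).
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_nvmf_ns() {
    TARGET_IF=${TARGET_IF:-cvl_0_0}        # port that will host the NVMe/TCP target
    INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}  # port the initiator connects from
    NS=${NS:-cvl_0_0_ns_spdk}

    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"

    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"   # target port leaves the root namespace

    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port (4420) on the initiator-side interface.
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, as the log does.
    run ping -c 1 10.0.0.2                      # root ns -> target ns
    run ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns
}
```

Once this is in place, the target application itself is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log's `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD`.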
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2914923 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2914923 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2914923 ']' 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.875 16:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 [2024-11-20 16:21:37.788430] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:37.876 [2024-11-20 16:21:37.789435] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:28:37.876 [2024-11-20 16:21:37.789474] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.876 [2024-11-20 16:21:37.871412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:37.876 [2024-11-20 16:21:37.915661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.876 [2024-11-20 16:21:37.915699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.876 [2024-11-20 16:21:37.915707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.876 [2024-11-20 16:21:37.915714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.876 [2024-11-20 16:21:37.915719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.876 [2024-11-20 16:21:37.916872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.876 [2024-11-20 16:21:37.916873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.876 [2024-11-20 16:21:37.986220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:37.876 [2024-11-20 16:21:37.986826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:37.876 [2024-11-20 16:21:37.986999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 [2024-11-20 16:21:38.061674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 [2024-11-20 16:21:38.090077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 NULL1 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 Delay0 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2915118 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:37.876 16:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:37.876 [2024-11-20 16:21:38.183462] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
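The `rpc_cmd` calls traced above are the whole test setup: create a TCP transport, a subsystem with a listener, and a null bdev wrapped in a delay bdev (about 1 s per I/O, so commands are still queued when the subsystem is deleted — producing the aborted completions below). A sketch of the same sequence spelled out with SPDK's `scripts/rpc.py`, using the values from this log; it assumes a running `nvmf_tgt` listening on the default `/var/tmp/spdk.sock`, and the `RPC` variable is an addition of this sketch:

```shell
RPC=${RPC:-scripts/rpc.py}   # path to SPDK's rpc.py; assumes nvmf_tgt is already running

build_and_delete_subsystem() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    $RPC bdev_null_create NULL1 1000 512   # size/block-size as traced in this log
    # Delay every I/O by ~1,000,000 us so requests are in flight during deletion.
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # spdk_nvme_perf runs against the target here; deleting the subsystem
    # mid-I/O is what the test exercises.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
}
```

The `sc=8` ("starting I/O failed: -6") completions that follow are the expected fallout: queued commands on the deleted subsystem complete with an error rather than hanging.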
00:28:39.775 16:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.775 16:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.775 16:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 starting I/O failed: -6 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 starting I/O failed: -6 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 starting I/O failed: -6 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 starting I/O failed: -6 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 Write completed with error (sct=0, sc=8) 00:28:39.775 starting I/O failed: -6 00:28:39.775 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, 
sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 [2024-11-20 16:21:40.401812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcd2c0 is same with the state(6) to be set 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error 
(sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 
00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with 
error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed 
with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 Read completed with error (sct=0, sc=8) 00:28:39.776 starting I/O failed: -6 00:28:39.776 Write completed with error (sct=0, sc=8) 00:28:39.776 Read 
completed with error (sct=0, sc=8) 00:28:39.777 starting I/O failed: -6 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 starting I/O failed: -6 00:28:39.777 Write completed with error (sct=0, sc=8) 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 starting I/O failed: -6 00:28:39.777 Write completed with error (sct=0, sc=8) 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 starting I/O failed: -6 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 starting I/O failed: -6 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 Read completed with error (sct=0, sc=8) 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:39.777 starting I/O failed: -6 00:28:40.713 [2024-11-20 16:21:41.361933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dce9a0 is same with the state(6) to be set 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed 
with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 [2024-11-20 16:21:41.404491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcd860 is same with the state(6) to be set 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 [2024-11-20 16:21:41.404630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1dcd4a0 is same with the state(6) to be set 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 
00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 [2024-11-20 16:21:41.405063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc21c00d020 is same with the state(6) to be set 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 
00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Write completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 Read completed with error (sct=0, sc=8) 00:28:40.713 [2024-11-20 16:21:41.405789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc21c00d800 is same with the state(6) to be set 00:28:40.713 Initializing NVMe Controllers 00:28:40.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.713 Controller IO queue size 128, less than required. 00:28:40.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:40.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:40.713 Initialization complete. Launching workers. 
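The `delay=0` / `kill -0 <pid>` / `sleep 0.5` lines that the trace interleaves with this perf output come from the polling loop in delete_subsystem.sh, which waits for spdk_nvme_perf to exit after its subsystem is deleted out from under it. A minimal self-contained sketch of that pattern (the background `sleep` stands in for the perf process, and the 30-iteration cap mirrors the `(( delay++ > 30 ))` guard visible in the trace; variable names here are illustrative, not taken verbatim from the script):

```shell
#!/bin/sh
# Sketch of the poll-until-exit loop seen in delete_subsystem.sh.
# A short-lived background 'sleep' stands in for spdk_nvme_perf.
sleep 1 &
perf_pid=$!

delay=0
# kill -0 sends no signal; it only tests whether the pid still exists.
while kill -0 "$perf_pid" 2>/dev/null; do
    if [ "$delay" -gt 30 ]; then
        # Give up after ~15 s (30 polls * 0.5 s), as the script's guard does.
        echo "timeout waiting for process $perf_pid"
        exit 1
    fi
    delay=$((delay + 1))
    sleep 0.5
done
echo "process exited"
```

Once the process is gone, `kill -0` fails with "No such process", which is exactly the `kill: (2915118) - No such process` message the log records before the script moves on to re-creating the subsystem.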
00:28:40.713 ======================================================== 00:28:40.713 Latency(us) 00:28:40.713 Device Information : IOPS MiB/s Average min max 00:28:40.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.19 0.08 904140.99 442.67 1013697.27 00:28:40.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.56 0.09 918226.43 437.17 1013700.19 00:28:40.713 ======================================================== 00:28:40.713 Total : 348.76 0.17 911514.31 437.17 1013700.19 00:28:40.713 00:28:40.713 [2024-11-20 16:21:41.406387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dce9a0 (9): Bad file descriptor 00:28:40.714 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:40.714 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:40.714 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2915118 00:28:40.714 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2915118 00:28:41.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2915118) - No such process 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2915118 00:28:41.281 16:21:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2915118 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2915118 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.281 [2024-11-20 16:21:41.938024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2915632 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:41.281 16:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:41.281 [2024-11-20 16:21:42.025154] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:41.846 16:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:41.846 16:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:41.846 16:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:42.411 16:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:42.411 16:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:42.411 16:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:42.669 16:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:42.669 16:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:42.669 16:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:43.235 16:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:28:43.235 16:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:43.235 16:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:43.800 16:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:43.800 16:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:43.800 16:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:44.462 16:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:44.462 16:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:44.462 16:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:44.462 Initializing NVMe Controllers 00:28:44.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.462 Controller IO queue size 128, less than required. 00:28:44.462 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:44.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:44.462 Initialization complete. Launching workers. 
00:28:44.462 ======================================================== 00:28:44.462 Latency(us) 00:28:44.462 Device Information : IOPS MiB/s Average min max 00:28:44.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002171.43 1000142.81 1006046.79 00:28:44.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004250.02 1000163.07 1010606.15 00:28:44.462 ======================================================== 00:28:44.462 Total : 256.00 0.12 1003210.73 1000142.81 1010606.15 00:28:44.462 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2915632 00:28:44.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2915632) - No such process 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2915632 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:44.749 rmmod nvme_tcp 00:28:44.749 rmmod nvme_fabrics 00:28:44.749 rmmod nvme_keyring 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2914923 ']' 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2914923 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2914923 ']' 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2914923 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.749 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914923 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.015 16:21:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914923' 00:28:45.015 killing process with pid 2914923 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2914923 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2914923 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.015 16:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.015 16:21:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:47.554 00:28:47.554 real 0m16.207s 00:28:47.554 user 0m26.282s 00:28:47.554 sys 0m6.114s 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:47.554 ************************************ 00:28:47.554 END TEST nvmf_delete_subsystem 00:28:47.554 ************************************ 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:47.554 ************************************ 00:28:47.554 START TEST nvmf_host_management 00:28:47.554 ************************************ 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:47.554 * Looking for test storage... 
00:28:47.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:47.554 16:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.554 16:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:47.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.554 --rc genhtml_branch_coverage=1 00:28:47.554 --rc genhtml_function_coverage=1 00:28:47.554 --rc genhtml_legend=1 00:28:47.554 --rc geninfo_all_blocks=1 00:28:47.554 --rc geninfo_unexecuted_blocks=1 00:28:47.554 00:28:47.554 ' 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:47.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.554 --rc genhtml_branch_coverage=1 00:28:47.554 --rc genhtml_function_coverage=1 00:28:47.554 --rc genhtml_legend=1 00:28:47.554 --rc geninfo_all_blocks=1 00:28:47.554 --rc geninfo_unexecuted_blocks=1 00:28:47.554 00:28:47.554 ' 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:47.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.554 --rc genhtml_branch_coverage=1 00:28:47.554 --rc genhtml_function_coverage=1 00:28:47.554 --rc genhtml_legend=1 00:28:47.554 --rc geninfo_all_blocks=1 00:28:47.554 --rc geninfo_unexecuted_blocks=1 00:28:47.554 00:28:47.554 ' 00:28:47.554 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:47.555 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.555 --rc genhtml_branch_coverage=1 00:28:47.555 --rc genhtml_function_coverage=1 00:28:47.555 --rc genhtml_legend=1 00:28:47.555 --rc geninfo_all_blocks=1 00:28:47.555 --rc geninfo_unexecuted_blocks=1 00:28:47.555 00:28:47.555 ' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.555 16:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.555 
16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.555 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:47.556 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:47.556 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.556 16:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.126 
16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.126 16:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:54.126 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:54.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.127 16:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:54.127 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.127 16:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:54.127 Found net devices under 0000:86:00.0: cvl_0_0 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:54.127 Found net devices under 0000:86:00.1: cvl_0_1 00:28:54.127 16:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.127 16:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:28:54.127 00:28:54.127 --- 10.0.0.2 ping statistics --- 00:28:54.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.127 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:28:54.127 00:28:54.127 --- 10.0.0.1 ping statistics --- 00:28:54.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.127 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2919779 00:28:54.127 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2919779 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2919779 ']' 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 [2024-11-20 16:21:54.120120] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:54.128 [2024-11-20 16:21:54.121077] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:28:54.128 [2024-11-20 16:21:54.121114] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.128 [2024-11-20 16:21:54.198573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.128 [2024-11-20 16:21:54.241784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.128 [2024-11-20 16:21:54.241823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.128 [2024-11-20 16:21:54.241830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.128 [2024-11-20 16:21:54.241836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.128 [2024-11-20 16:21:54.241841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:54.128 [2024-11-20 16:21:54.243324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.128 [2024-11-20 16:21:54.243436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.128 [2024-11-20 16:21:54.243540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.128 [2024-11-20 16:21:54.243541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:54.128 [2024-11-20 16:21:54.311720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:54.128 [2024-11-20 16:21:54.312721] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:54.128 [2024-11-20 16:21:54.312736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:54.128 [2024-11-20 16:21:54.313154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:54.128 [2024-11-20 16:21:54.313193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 [2024-11-20 16:21:54.380358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 16:21:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 Malloc0 00:28:54.128 [2024-11-20 16:21:54.468574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2919888 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2919888 /var/tmp/bdevperf.sock 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2919888 ']' 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.128 { 00:28:54.128 "params": { 00:28:54.128 "name": "Nvme$subsystem", 00:28:54.128 "trtype": "$TEST_TRANSPORT", 00:28:54.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.128 "adrfam": "ipv4", 00:28:54.128 "trsvcid": "$NVMF_PORT", 00:28:54.128 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.128 "hdgst": ${hdgst:-false}, 00:28:54.128 "ddgst": ${ddgst:-false} 00:28:54.128 }, 00:28:54.128 "method": "bdev_nvme_attach_controller" 00:28:54.128 } 00:28:54.128 EOF 00:28:54.128 )") 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:54.128 16:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:54.128 "params": { 00:28:54.128 "name": "Nvme0", 00:28:54.128 "trtype": "tcp", 00:28:54.128 "traddr": "10.0.0.2", 00:28:54.128 "adrfam": "ipv4", 00:28:54.128 "trsvcid": "4420", 00:28:54.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:54.128 "hdgst": false, 00:28:54.128 "ddgst": false 00:28:54.128 }, 00:28:54.128 "method": "bdev_nvme_attach_controller" 00:28:54.128 }' 00:28:54.128 [2024-11-20 16:21:54.565382] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:28:54.128 [2024-11-20 16:21:54.565435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919888 ] 00:28:54.128 [2024-11-20 16:21:54.643276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.128 [2024-11-20 16:21:54.684867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.386 Running I/O for 10 seconds... 
00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:54.644 16:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.644 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=846 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 846 -ge 100 ']' 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.903 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.903 
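The `waitforio` steps traced above (host_management.sh@52-64) poll `bdev_get_iostat` over the bdevperf RPC socket until the bdev reports at least 100 completed reads, giving up after 10 tries; here the counter read 846 on the first pass, so the loop broke immediately. A sketch of that loop under stated assumptions: `get_read_ops` is a stand-in I introduced for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`, and the 846 it returns is the value observed in this log:

```shell
get_read_ops() {
    # Placeholder for the real RPC + jq pipeline described above.
    echo 846
}

waitforio() {
    local i ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(get_read_ops)
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough I/O has flowed; the target is demonstrably serving reads
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio && echo "io threshold reached"
```

Only once this gate passes does the test proceed to the disruptive step that follows, `nvmf_subsystem_remove_host`, so the SQ-deletion aborts below are provoked against a connection known to be actively doing I/O.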
[2024-11-20 16:21:55.488055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.903 [2024-11-20 16:21:55.488157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d70 is same with the state(6) to be set 00:28:54.904 [2024-11-20 16:21:55.491005] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.904 [2024-11-20 16:21:55.491039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.491049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.904 [2024-11-20 16:21:55.491057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.491065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.904 [2024-11-20 16:21:55.491072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.491079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.904 [2024-11-20 16:21:55.491086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.491093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bb500 is same with the state(6) to be set 00:28:54.904 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.904 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:54.904 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.904 16:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:54.904 [2024-11-20 16:21:55.498839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.498992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.498999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 
16:21:55.499128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499295] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.904 [2024-11-20 16:21:55.499339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.904 [2024-11-20 16:21:55.499348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:54.905 [2024-11-20 16:21:55.499472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 
16:21:55.499553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.499832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.905 [2024-11-20 16:21:55.499841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.905 [2024-11-20 16:21:55.500792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:54.905 task offset: 122880 on job bdev=Nvme0n1 fails 00:28:54.905 00:28:54.905 Latency(us) 00:28:54.905 [2024-11-20T15:21:55.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.905 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.905 Job: Nvme0n1 ended in about 0.49 seconds with error 00:28:54.905 Verification LBA range: start 0x0 length 0x400 00:28:54.905 Nvme0n1 : 0.49 1944.23 121.51 129.62 0.00 30126.46 1424.70 27696.08 00:28:54.905 [2024-11-20T15:21:55.742Z] =================================================================================================================== 00:28:54.905 [2024-11-20T15:21:55.742Z] Total : 1944.23 121.51 129.62 0.00 30126.46 1424.70 27696.08 00:28:54.905 [2024-11-20 16:21:55.503191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:54.905 [2024-11-20 16:21:55.503213] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bb500 (9): Bad file descriptor 00:28:54.905 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.905 16:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:54.905 [2024-11-20 16:21:55.506077] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2919888 00:28:55.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2919888) - No such process 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.838 16:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.838 { 00:28:55.838 "params": { 00:28:55.838 "name": "Nvme$subsystem", 00:28:55.838 "trtype": "$TEST_TRANSPORT", 00:28:55.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.838 "adrfam": "ipv4", 00:28:55.838 "trsvcid": "$NVMF_PORT", 00:28:55.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.838 "hdgst": ${hdgst:-false}, 00:28:55.838 "ddgst": ${ddgst:-false} 00:28:55.838 }, 00:28:55.838 "method": "bdev_nvme_attach_controller" 00:28:55.838 } 00:28:55.838 EOF 00:28:55.838 )") 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:55.838 16:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.838 "params": { 00:28:55.838 "name": "Nvme0", 00:28:55.838 "trtype": "tcp", 00:28:55.838 "traddr": "10.0.0.2", 00:28:55.838 "adrfam": "ipv4", 00:28:55.838 "trsvcid": "4420", 00:28:55.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.838 "hdgst": false, 00:28:55.838 "ddgst": false 00:28:55.838 }, 00:28:55.838 "method": "bdev_nvme_attach_controller" 00:28:55.838 }' 00:28:55.838 [2024-11-20 16:21:56.560018] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:28:55.838 [2024-11-20 16:21:56.560062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920138 ] 00:28:55.838 [2024-11-20 16:21:56.635725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.097 [2024-11-20 16:21:56.677256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.354 Running I/O for 1 seconds... 00:28:57.286 1948.00 IOPS, 121.75 MiB/s 00:28:57.286 Latency(us) 00:28:57.286 [2024-11-20T15:21:58.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.286 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.286 Verification LBA range: start 0x0 length 0x400 00:28:57.286 Nvme0n1 : 1.01 1989.61 124.35 0.00 0.00 31528.68 3063.10 27810.06 00:28:57.286 [2024-11-20T15:21:58.123Z] =================================================================================================================== 00:28:57.286 [2024-11-20T15:21:58.123Z] Total : 1989.61 124.35 0.00 0.00 31528.68 3063.10 27810.06 00:28:57.544 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:57.544 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:57.544 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:57.544 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.544 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.545 rmmod nvme_tcp 00:28:57.545 rmmod nvme_fabrics 00:28:57.545 rmmod nvme_keyring 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2919779 ']' 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2919779 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2919779 ']' 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2919779 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:57.545 16:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2919779 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2919779' 00:28:57.545 killing process with pid 2919779 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2919779 00:28:57.545 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2919779 00:28:57.804 [2024-11-20 16:21:58.466879] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.804 16:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.804 16:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:00.340 00:29:00.340 real 0m12.665s 00:29:00.340 user 0m19.337s 00:29:00.340 sys 0m6.448s 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:00.340 ************************************ 00:29:00.340 END TEST nvmf_host_management 00:29:00.340 ************************************ 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:00.340 
16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:00.340 ************************************ 00:29:00.340 START TEST nvmf_lvol 00:29:00.340 ************************************ 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:00.340 * Looking for test storage... 00:29:00.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.340 16:22:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:00.340 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.341 --rc genhtml_branch_coverage=1 00:29:00.341 --rc 
genhtml_function_coverage=1 00:29:00.341 --rc genhtml_legend=1 00:29:00.341 --rc geninfo_all_blocks=1 00:29:00.341 --rc geninfo_unexecuted_blocks=1 00:29:00.341 00:29:00.341 ' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.341 --rc genhtml_branch_coverage=1 00:29:00.341 --rc genhtml_function_coverage=1 00:29:00.341 --rc genhtml_legend=1 00:29:00.341 --rc geninfo_all_blocks=1 00:29:00.341 --rc geninfo_unexecuted_blocks=1 00:29:00.341 00:29:00.341 ' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.341 --rc genhtml_branch_coverage=1 00:29:00.341 --rc genhtml_function_coverage=1 00:29:00.341 --rc genhtml_legend=1 00:29:00.341 --rc geninfo_all_blocks=1 00:29:00.341 --rc geninfo_unexecuted_blocks=1 00:29:00.341 00:29:00.341 ' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.341 --rc genhtml_branch_coverage=1 00:29:00.341 --rc genhtml_function_coverage=1 00:29:00.341 --rc genhtml_legend=1 00:29:00.341 --rc geninfo_all_blocks=1 00:29:00.341 --rc geninfo_unexecuted_blocks=1 00:29:00.341 00:29:00.341 ' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.341 16:22:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.341 16:22:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.341 16:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:06.913 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.913 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.913 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.913 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:06.914 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:06.914 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.914 16:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:06.914 Found net devices under 0000:86:00.0: cvl_0_0 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:06.914 Found net devices under 0000:86:00.1: cvl_0_1 00:29:06.914 16:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.914 16:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:06.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:29:06.914 00:29:06.914 --- 10.0.0.2 ping statistics --- 00:29:06.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.914 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:29:06.914 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:06.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:06.914 00:29:06.915 --- 10.0.0.1 ping statistics --- 00:29:06.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.915 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:06.915 
16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2923918 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2923918 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2923918 ']' 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.915 16:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 [2024-11-20 16:22:06.824164] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:06.915 [2024-11-20 16:22:06.825071] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:29:06.915 [2024-11-20 16:22:06.825102] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.915 [2024-11-20 16:22:06.890996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.915 [2024-11-20 16:22:06.934745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.915 [2024-11-20 16:22:06.934781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.915 [2024-11-20 16:22:06.934788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.915 [2024-11-20 16:22:06.934794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.915 [2024-11-20 16:22:06.934799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.915 [2024-11-20 16:22:06.936106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.915 [2024-11-20 16:22:06.936150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.915 [2024-11-20 16:22:06.936151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.915 [2024-11-20 16:22:07.004227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:06.915 [2024-11-20 16:22:07.004812] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:06.915 [2024-11-20 16:22:07.004940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:06.915 [2024-11-20 16:22:07.005172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:06.915 [2024-11-20 16:22:07.252841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:06.915 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:07.173 16:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:07.432 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b3ddadf8-d150-4d4c-839a-1d27c35ac736 00:29:07.432 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3ddadf8-d150-4d4c-839a-1d27c35ac736 lvol 20 00:29:07.691 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ae7c521f-4acc-403b-9f46-b9e7d2f1a85e 00:29:07.691 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:07.691 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae7c521f-4acc-403b-9f46-b9e7d2f1a85e 00:29:07.950 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:08.209 [2024-11-20 16:22:08.884765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.209 16:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:08.468 
16:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:08.468 16:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2924404 00:29:08.468 16:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:09.401 16:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ae7c521f-4acc-403b-9f46-b9e7d2f1a85e MY_SNAPSHOT 00:29:09.660 16:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5d95d54e-dd51-4f11-ba78-16c3ce53285b 00:29:09.660 16:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ae7c521f-4acc-403b-9f46-b9e7d2f1a85e 30 00:29:09.918 16:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5d95d54e-dd51-4f11-ba78-16c3ce53285b MY_CLONE 00:29:10.177 16:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6cb9e063-d600-4180-8c02-85cc6cf54320 00:29:10.177 16:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6cb9e063-d600-4180-8c02-85cc6cf54320 00:29:10.743 16:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2924404 00:29:18.848 Initializing NVMe Controllers 00:29:18.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:18.848 
Controller IO queue size 128, less than required. 00:29:18.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:18.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:18.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:18.848 Initialization complete. Launching workers. 00:29:18.848 ======================================================== 00:29:18.848 Latency(us) 00:29:18.848 Device Information : IOPS MiB/s Average min max 00:29:18.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11978.61 46.79 10689.58 2189.11 57451.96 00:29:18.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11902.51 46.49 10759.01 3876.30 61642.33 00:29:18.848 ======================================================== 00:29:18.848 Total : 23881.11 93.29 10724.19 2189.11 61642.33 00:29:18.848 00:29:18.848 16:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:18.848 16:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae7c521f-4acc-403b-9f46-b9e7d2f1a85e 00:29:19.106 16:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3ddadf8-d150-4d4c-839a-1d27c35ac736 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.364 rmmod nvme_tcp 00:29:19.364 rmmod nvme_fabrics 00:29:19.364 rmmod nvme_keyring 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2923918 ']' 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2923918 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2923918 ']' 00:29:19.364 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2923918 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2923918 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2923918' 00:29:19.365 killing process with pid 2923918 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2923918 00:29:19.365 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2923918 00:29:19.622 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.623 16:22:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.623 16:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.157 00:29:22.157 real 0m21.818s 00:29:22.157 user 0m55.484s 00:29:22.157 sys 0m9.775s 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:22.157 ************************************ 00:29:22.157 END TEST nvmf_lvol 00:29:22.157 ************************************ 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:22.157 ************************************ 00:29:22.157 START TEST nvmf_lvs_grow 00:29:22.157 ************************************ 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:22.157 * Looking for test storage... 
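The teardown above runs the `killprocess` helper (common/autotest_common.sh): it checks the pid is still alive, reads the command name with `ps --no-headers -o comm=` and refuses to SIGKILL a `sudo` wrapper, then kills and reaps the target. The sketch below is a hypothetical reconstruction of that pattern, not SPDK's actual helper; a background `sleep` stands in for the nvmf_tgt reactor process.

```shell
# Sketch of the killprocess pattern traced in the log (assumption: simplified
# from common/autotest_common.sh; "sleep" substitutes for the SPDK reactor).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # is the process still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1           # never SIGKILL the sudo wrapper
    echo "killing process with pid $pid"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true             # reap; ignore the SIGKILL status
}

sleep 60 &
victim=$!
killprocess "$victim"
if kill -0 "$victim" 2>/dev/null; then result=alive; else result=gone; fi
echo "$result"
```

The `comm=` guard matters because the harness often launches targets via `sudo`: killing the wrapper would orphan the real process while reporting success.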
00:29:22.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.157 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.158 16:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.158 16:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.158 --rc genhtml_branch_coverage=1 00:29:22.158 --rc genhtml_function_coverage=1 00:29:22.158 --rc genhtml_legend=1 00:29:22.158 --rc geninfo_all_blocks=1 00:29:22.158 --rc geninfo_unexecuted_blocks=1 00:29:22.158 00:29:22.158 ' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.158 --rc genhtml_branch_coverage=1 00:29:22.158 --rc genhtml_function_coverage=1 00:29:22.158 --rc genhtml_legend=1 00:29:22.158 --rc geninfo_all_blocks=1 00:29:22.158 --rc geninfo_unexecuted_blocks=1 00:29:22.158 00:29:22.158 ' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.158 --rc genhtml_branch_coverage=1 00:29:22.158 --rc genhtml_function_coverage=1 00:29:22.158 --rc genhtml_legend=1 00:29:22.158 --rc geninfo_all_blocks=1 00:29:22.158 --rc geninfo_unexecuted_blocks=1 00:29:22.158 00:29:22.158 ' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.158 --rc genhtml_branch_coverage=1 00:29:22.158 --rc genhtml_function_coverage=1 00:29:22.158 --rc genhtml_legend=1 00:29:22.158 --rc geninfo_all_blocks=1 00:29:22.158 --rc 
geninfo_unexecuted_blocks=1 00:29:22.158 00:29:22.158 ' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:22.158 16:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.158 16:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.158 16:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.158 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.159 16:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.731 
16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.731 16:22:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.731 16:22:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:28.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:28.731 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.731 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:28.732 Found net devices under 0000:86:00.0: cvl_0_0 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.732 16:22:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:28.732 Found net devices under 0000:86:00.1: cvl_0_1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.732 
16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:29:28.732 00:29:28.732 --- 10.0.0.2 ping statistics --- 00:29:28.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.732 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:29:28.732 00:29:28.732 --- 10.0.0.1 ping statistics --- 00:29:28.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.732 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.732 16:22:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2929551 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2929551 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2929551 ']' 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.732 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.733 [2024-11-20 16:22:28.663266] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:28.733 [2024-11-20 16:22:28.664183] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:29:28.733 [2024-11-20 16:22:28.664216] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.733 [2024-11-20 16:22:28.743087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.733 [2024-11-20 16:22:28.785192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.733 [2024-11-20 16:22:28.785232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.733 [2024-11-20 16:22:28.785239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.733 [2024-11-20 16:22:28.785245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.733 [2024-11-20 16:22:28.785250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.733 [2024-11-20 16:22:28.785792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.733 [2024-11-20 16:22:28.853391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:28.733 [2024-11-20 16:22:28.853606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.733 16:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:28.733 [2024-11-20 16:22:29.086497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.733 ************************************ 00:29:28.733 START TEST lvs_grow_clean 00:29:28.733 ************************************ 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:28.733 16:22:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:28.733 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:28.993 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:28.993 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:28.993 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:28.993 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:28.993 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:28.993 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 lvol 150 00:29:29.252 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=92f559aa-9bf7-48cb-83c3-b693ec05e779 00:29:29.252 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:29.252 16:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:29.511 [2024-11-20 16:22:30.154191] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:29.511 [2024-11-20 16:22:30.154321] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:29.511 true 00:29:29.511 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:29.511 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:29.770 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:29.770 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:29.770 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92f559aa-9bf7-48cb-83c3-b693ec05e779 00:29:30.030 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:30.289 [2024-11-20 16:22:30.946685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.289 16:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2930036 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2930036 /var/tmp/bdevperf.sock 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2930036 ']' 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:30.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.549 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.549 [2024-11-20 16:22:31.177564] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:29:30.549 [2024-11-20 16:22:31.177612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930036 ] 00:29:30.549 [2024-11-20 16:22:31.253294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.549 [2024-11-20 16:22:31.296094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.809 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.809 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:30.809 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:31.067 Nvme0n1 00:29:31.068 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:31.326 [ 00:29:31.326 { 00:29:31.326 "name": "Nvme0n1", 00:29:31.326 "aliases": [ 00:29:31.326 "92f559aa-9bf7-48cb-83c3-b693ec05e779" 00:29:31.326 ], 00:29:31.326 "product_name": "NVMe disk", 00:29:31.326 
"block_size": 4096, 00:29:31.326 "num_blocks": 38912, 00:29:31.326 "uuid": "92f559aa-9bf7-48cb-83c3-b693ec05e779", 00:29:31.326 "numa_id": 1, 00:29:31.326 "assigned_rate_limits": { 00:29:31.326 "rw_ios_per_sec": 0, 00:29:31.326 "rw_mbytes_per_sec": 0, 00:29:31.326 "r_mbytes_per_sec": 0, 00:29:31.326 "w_mbytes_per_sec": 0 00:29:31.326 }, 00:29:31.326 "claimed": false, 00:29:31.326 "zoned": false, 00:29:31.326 "supported_io_types": { 00:29:31.326 "read": true, 00:29:31.326 "write": true, 00:29:31.326 "unmap": true, 00:29:31.326 "flush": true, 00:29:31.326 "reset": true, 00:29:31.326 "nvme_admin": true, 00:29:31.326 "nvme_io": true, 00:29:31.326 "nvme_io_md": false, 00:29:31.326 "write_zeroes": true, 00:29:31.326 "zcopy": false, 00:29:31.326 "get_zone_info": false, 00:29:31.326 "zone_management": false, 00:29:31.326 "zone_append": false, 00:29:31.326 "compare": true, 00:29:31.327 "compare_and_write": true, 00:29:31.327 "abort": true, 00:29:31.327 "seek_hole": false, 00:29:31.327 "seek_data": false, 00:29:31.327 "copy": true, 00:29:31.327 "nvme_iov_md": false 00:29:31.327 }, 00:29:31.327 "memory_domains": [ 00:29:31.327 { 00:29:31.327 "dma_device_id": "system", 00:29:31.327 "dma_device_type": 1 00:29:31.327 } 00:29:31.327 ], 00:29:31.327 "driver_specific": { 00:29:31.327 "nvme": [ 00:29:31.327 { 00:29:31.327 "trid": { 00:29:31.327 "trtype": "TCP", 00:29:31.327 "adrfam": "IPv4", 00:29:31.327 "traddr": "10.0.0.2", 00:29:31.327 "trsvcid": "4420", 00:29:31.327 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:31.327 }, 00:29:31.327 "ctrlr_data": { 00:29:31.327 "cntlid": 1, 00:29:31.327 "vendor_id": "0x8086", 00:29:31.327 "model_number": "SPDK bdev Controller", 00:29:31.327 "serial_number": "SPDK0", 00:29:31.327 "firmware_revision": "25.01", 00:29:31.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.327 "oacs": { 00:29:31.327 "security": 0, 00:29:31.327 "format": 0, 00:29:31.327 "firmware": 0, 00:29:31.327 "ns_manage": 0 00:29:31.327 }, 00:29:31.327 "multi_ctrlr": true, 
00:29:31.327 "ana_reporting": false 00:29:31.327 }, 00:29:31.327 "vs": { 00:29:31.327 "nvme_version": "1.3" 00:29:31.327 }, 00:29:31.327 "ns_data": { 00:29:31.327 "id": 1, 00:29:31.327 "can_share": true 00:29:31.327 } 00:29:31.327 } 00:29:31.327 ], 00:29:31.327 "mp_policy": "active_passive" 00:29:31.327 } 00:29:31.327 } 00:29:31.327 ] 00:29:31.327 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2930262 00:29:31.327 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:31.327 16:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:31.327 Running I/O for 10 seconds... 00:29:32.264 Latency(us) 00:29:32.264 [2024-11-20T15:22:33.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.264 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:32.264 [2024-11-20T15:22:33.101Z] =================================================================================================================== 00:29:32.264 [2024-11-20T15:22:33.101Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:32.264 00:29:33.201 16:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:33.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.460 Nvme0n1 : 2.00 22511.00 87.93 0.00 0.00 0.00 0.00 0.00 00:29:33.460 [2024-11-20T15:22:34.297Z] 
=================================================================================================================== 00:29:33.460 [2024-11-20T15:22:34.297Z] Total : 22511.00 87.93 0.00 0.00 0.00 0.00 0.00 00:29:33.460 00:29:33.460 true 00:29:33.460 16:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:33.460 16:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:33.719 16:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:33.719 16:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:33.719 16:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2930262 00:29:34.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.288 Nvme0n1 : 3.00 22627.33 88.39 0.00 0.00 0.00 0.00 0.00 00:29:34.288 [2024-11-20T15:22:35.125Z] =================================================================================================================== 00:29:34.288 [2024-11-20T15:22:35.125Z] Total : 22627.33 88.39 0.00 0.00 0.00 0.00 0.00 00:29:34.288 00:29:35.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.667 Nvme0n1 : 4.00 22689.75 88.63 0.00 0.00 0.00 0.00 0.00 00:29:35.667 [2024-11-20T15:22:36.504Z] =================================================================================================================== 00:29:35.667 [2024-11-20T15:22:36.504Z] Total : 22689.75 88.63 0.00 0.00 0.00 0.00 0.00 00:29:35.667 00:29:36.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:36.605 Nvme0n1 : 5.00 22774.60 88.96 0.00 0.00 0.00 0.00 0.00 00:29:36.605 [2024-11-20T15:22:37.442Z] =================================================================================================================== 00:29:36.605 [2024-11-20T15:22:37.442Z] Total : 22774.60 88.96 0.00 0.00 0.00 0.00 0.00 00:29:36.605 00:29:37.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.574 Nvme0n1 : 6.00 22810.00 89.10 0.00 0.00 0.00 0.00 0.00 00:29:37.574 [2024-11-20T15:22:38.411Z] =================================================================================================================== 00:29:37.574 [2024-11-20T15:22:38.411Z] Total : 22810.00 89.10 0.00 0.00 0.00 0.00 0.00 00:29:37.574 00:29:38.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.512 Nvme0n1 : 7.00 22835.29 89.20 0.00 0.00 0.00 0.00 0.00 00:29:38.512 [2024-11-20T15:22:39.349Z] =================================================================================================================== 00:29:38.512 [2024-11-20T15:22:39.349Z] Total : 22835.29 89.20 0.00 0.00 0.00 0.00 0.00 00:29:38.512 00:29:39.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.449 Nvme0n1 : 8.00 22838.38 89.21 0.00 0.00 0.00 0.00 0.00 00:29:39.449 [2024-11-20T15:22:40.286Z] =================================================================================================================== 00:29:39.449 [2024-11-20T15:22:40.286Z] Total : 22838.38 89.21 0.00 0.00 0.00 0.00 0.00 00:29:39.449 00:29:40.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.471 Nvme0n1 : 9.00 22812.56 89.11 0.00 0.00 0.00 0.00 0.00 00:29:40.471 [2024-11-20T15:22:41.308Z] =================================================================================================================== 00:29:40.471 [2024-11-20T15:22:41.308Z] Total : 22812.56 89.11 0.00 0.00 0.00 0.00 0.00 00:29:40.471 
00:29:41.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.502 Nvme0n1 : 10.00 22830.00 89.18 0.00 0.00 0.00 0.00 0.00 00:29:41.502 [2024-11-20T15:22:42.339Z] =================================================================================================================== 00:29:41.502 [2024-11-20T15:22:42.339Z] Total : 22830.00 89.18 0.00 0.00 0.00 0.00 0.00 00:29:41.502 00:29:41.502 00:29:41.502 Latency(us) 00:29:41.502 [2024-11-20T15:22:42.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.502 Nvme0n1 : 10.01 22830.22 89.18 0.00 0.00 5603.57 3305.29 26214.40 00:29:41.502 [2024-11-20T15:22:42.339Z] =================================================================================================================== 00:29:41.502 [2024-11-20T15:22:42.339Z] Total : 22830.22 89.18 0.00 0.00 5603.57 3305.29 26214.40 00:29:41.502 { 00:29:41.502 "results": [ 00:29:41.502 { 00:29:41.502 "job": "Nvme0n1", 00:29:41.502 "core_mask": "0x2", 00:29:41.502 "workload": "randwrite", 00:29:41.502 "status": "finished", 00:29:41.502 "queue_depth": 128, 00:29:41.502 "io_size": 4096, 00:29:41.502 "runtime": 10.005512, 00:29:41.502 "iops": 22830.215984949096, 00:29:41.502 "mibps": 89.18053119120741, 00:29:41.502 "io_failed": 0, 00:29:41.502 "io_timeout": 0, 00:29:41.502 "avg_latency_us": 5603.571216381758, 00:29:41.502 "min_latency_us": 3305.2939130434784, 00:29:41.502 "max_latency_us": 26214.4 00:29:41.502 } 00:29:41.502 ], 00:29:41.502 "core_count": 1 00:29:41.502 } 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2930036 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2930036 ']' 00:29:41.502 16:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2930036 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930036 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930036' 00:29:41.502 killing process with pid 2930036 00:29:41.502 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2930036 00:29:41.502 Received shutdown signal, test time was about 10.000000 seconds 00:29:41.502 00:29:41.502 Latency(us) 00:29:41.502 [2024-11-20T15:22:42.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.502 [2024-11-20T15:22:42.339Z] =================================================================================================================== 00:29:41.502 [2024-11-20T15:22:42.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.503 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2930036 00:29:41.761 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.761 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:42.020 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:42.020 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:42.279 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:42.279 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:42.279 16:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:42.539 [2024-11-20 16:22:43.130294] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:42.539 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:42.539 request: 00:29:42.539 { 00:29:42.539 "uuid": "0f9db160-2f18-44fb-8b2b-3c20f11431f9", 00:29:42.539 "method": 
"bdev_lvol_get_lvstores", 00:29:42.539 "req_id": 1 00:29:42.539 } 00:29:42.539 Got JSON-RPC error response 00:29:42.539 response: 00:29:42.539 { 00:29:42.539 "code": -19, 00:29:42.539 "message": "No such device" 00:29:42.539 } 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:42.798 aio_bdev 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 92f559aa-9bf7-48cb-83c3-b693ec05e779 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=92f559aa-9bf7-48cb-83c3-b693ec05e779 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:42.798 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:43.058 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92f559aa-9bf7-48cb-83c3-b693ec05e779 -t 2000 00:29:43.317 [ 00:29:43.317 { 00:29:43.317 "name": "92f559aa-9bf7-48cb-83c3-b693ec05e779", 00:29:43.317 "aliases": [ 00:29:43.317 "lvs/lvol" 00:29:43.317 ], 00:29:43.317 "product_name": "Logical Volume", 00:29:43.317 "block_size": 4096, 00:29:43.317 "num_blocks": 38912, 00:29:43.317 "uuid": "92f559aa-9bf7-48cb-83c3-b693ec05e779", 00:29:43.317 "assigned_rate_limits": { 00:29:43.317 "rw_ios_per_sec": 0, 00:29:43.317 "rw_mbytes_per_sec": 0, 00:29:43.317 "r_mbytes_per_sec": 0, 00:29:43.317 "w_mbytes_per_sec": 0 00:29:43.317 }, 00:29:43.317 "claimed": false, 00:29:43.317 "zoned": false, 00:29:43.317 "supported_io_types": { 00:29:43.317 "read": true, 00:29:43.317 "write": true, 00:29:43.317 "unmap": true, 00:29:43.317 "flush": false, 00:29:43.317 "reset": true, 00:29:43.317 "nvme_admin": false, 00:29:43.317 "nvme_io": false, 00:29:43.317 "nvme_io_md": false, 00:29:43.317 "write_zeroes": true, 00:29:43.317 "zcopy": false, 00:29:43.317 "get_zone_info": false, 00:29:43.317 "zone_management": false, 00:29:43.317 "zone_append": false, 00:29:43.317 "compare": false, 00:29:43.317 "compare_and_write": false, 00:29:43.317 "abort": false, 00:29:43.317 "seek_hole": true, 00:29:43.317 "seek_data": true, 00:29:43.317 "copy": false, 00:29:43.317 "nvme_iov_md": false 00:29:43.317 }, 00:29:43.317 "driver_specific": { 00:29:43.317 "lvol": { 00:29:43.317 "lvol_store_uuid": "0f9db160-2f18-44fb-8b2b-3c20f11431f9", 00:29:43.317 "base_bdev": "aio_bdev", 00:29:43.317 
"thin_provision": false, 00:29:43.317 "num_allocated_clusters": 38, 00:29:43.317 "snapshot": false, 00:29:43.317 "clone": false, 00:29:43.317 "esnap_clone": false 00:29:43.317 } 00:29:43.317 } 00:29:43.317 } 00:29:43.317 ] 00:29:43.317 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:43.317 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:43.317 16:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:43.577 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:43.577 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 00:29:43.577 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:43.577 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:43.577 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 92f559aa-9bf7-48cb-83c3-b693ec05e779 00:29:43.836 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f9db160-2f18-44fb-8b2b-3c20f11431f9 
00:29:44.095 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:44.355 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:44.355 00:29:44.355 real 0m15.833s 00:29:44.355 user 0m15.386s 00:29:44.355 sys 0m1.491s 00:29:44.355 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.355 16:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:44.355 ************************************ 00:29:44.355 END TEST lvs_grow_clean 00:29:44.355 ************************************ 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:44.355 ************************************ 00:29:44.355 START TEST lvs_grow_dirty 00:29:44.355 ************************************ 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:44.355 16:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:44.355 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:44.614 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:44.614 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:44.873 16:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:44.873 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:44.873 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:44.873 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:44.873 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:44.873 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae lvol 150 00:29:45.132 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bc392b8d-7d48-4299-be78-11a0dbcab577 00:29:45.132 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:45.132 16:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:45.391 [2024-11-20 16:22:46.070198] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:45.391 [2024-11-20 
16:22:46.070328] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:45.391 true 00:29:45.391 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:45.391 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:45.650 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:45.650 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:45.909 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bc392b8d-7d48-4299-be78-11a0dbcab577 00:29:45.909 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:46.168 [2024-11-20 16:22:46.850621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.168 16:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2932626 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2932626 /var/tmp/bdevperf.sock 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2932626 ']' 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:46.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.427 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:46.427 [2024-11-20 16:22:47.089463] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:29:46.427 [2024-11-20 16:22:47.089508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932626 ] 00:29:46.427 [2024-11-20 16:22:47.166319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.427 [2024-11-20 16:22:47.209324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.686 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.686 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:46.686 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:46.946 Nvme0n1 00:29:46.946 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:46.946 [ 00:29:46.946 { 00:29:46.946 "name": "Nvme0n1", 00:29:46.946 "aliases": [ 00:29:46.946 "bc392b8d-7d48-4299-be78-11a0dbcab577" 00:29:46.946 ], 00:29:46.946 "product_name": "NVMe disk", 00:29:46.946 "block_size": 4096, 00:29:46.946 "num_blocks": 38912, 00:29:46.946 "uuid": "bc392b8d-7d48-4299-be78-11a0dbcab577", 00:29:46.946 "numa_id": 1, 00:29:46.946 "assigned_rate_limits": { 00:29:46.946 "rw_ios_per_sec": 0, 00:29:46.946 "rw_mbytes_per_sec": 0, 00:29:46.946 "r_mbytes_per_sec": 0, 00:29:46.946 "w_mbytes_per_sec": 0 00:29:46.946 }, 00:29:46.946 "claimed": false, 00:29:46.946 "zoned": false, 
00:29:46.946 "supported_io_types": { 00:29:46.946 "read": true, 00:29:46.946 "write": true, 00:29:46.946 "unmap": true, 00:29:46.946 "flush": true, 00:29:46.946 "reset": true, 00:29:46.946 "nvme_admin": true, 00:29:46.946 "nvme_io": true, 00:29:46.946 "nvme_io_md": false, 00:29:46.946 "write_zeroes": true, 00:29:46.946 "zcopy": false, 00:29:46.946 "get_zone_info": false, 00:29:46.946 "zone_management": false, 00:29:46.946 "zone_append": false, 00:29:46.946 "compare": true, 00:29:46.946 "compare_and_write": true, 00:29:46.946 "abort": true, 00:29:46.946 "seek_hole": false, 00:29:46.946 "seek_data": false, 00:29:46.946 "copy": true, 00:29:46.946 "nvme_iov_md": false 00:29:46.946 }, 00:29:46.946 "memory_domains": [ 00:29:46.946 { 00:29:46.946 "dma_device_id": "system", 00:29:46.946 "dma_device_type": 1 00:29:46.946 } 00:29:46.946 ], 00:29:46.946 "driver_specific": { 00:29:46.946 "nvme": [ 00:29:46.946 { 00:29:46.946 "trid": { 00:29:46.946 "trtype": "TCP", 00:29:46.946 "adrfam": "IPv4", 00:29:46.946 "traddr": "10.0.0.2", 00:29:46.946 "trsvcid": "4420", 00:29:46.946 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:46.946 }, 00:29:46.946 "ctrlr_data": { 00:29:46.946 "cntlid": 1, 00:29:46.946 "vendor_id": "0x8086", 00:29:46.946 "model_number": "SPDK bdev Controller", 00:29:46.946 "serial_number": "SPDK0", 00:29:46.946 "firmware_revision": "25.01", 00:29:46.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.946 "oacs": { 00:29:46.946 "security": 0, 00:29:46.946 "format": 0, 00:29:46.946 "firmware": 0, 00:29:46.946 "ns_manage": 0 00:29:46.946 }, 00:29:46.946 "multi_ctrlr": true, 00:29:46.946 "ana_reporting": false 00:29:46.946 }, 00:29:46.946 "vs": { 00:29:46.946 "nvme_version": "1.3" 00:29:46.946 }, 00:29:46.946 "ns_data": { 00:29:46.946 "id": 1, 00:29:46.946 "can_share": true 00:29:46.946 } 00:29:46.946 } 00:29:46.946 ], 00:29:46.946 "mp_policy": "active_passive" 00:29:46.946 } 00:29:46.946 } 00:29:46.946 ] 00:29:46.946 16:22:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2932846 00:29:46.946 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:46.946 16:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:47.206 Running I/O for 10 seconds... 00:29:48.152 Latency(us) 00:29:48.152 [2024-11-20T15:22:48.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.153 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:48.153 [2024-11-20T15:22:48.990Z] =================================================================================================================== 00:29:48.153 [2024-11-20T15:22:48.990Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:48.153 00:29:49.094 16:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:49.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.094 Nvme0n1 : 2.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:49.094 [2024-11-20T15:22:49.931Z] =================================================================================================================== 00:29:49.094 [2024-11-20T15:22:49.931Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:49.094 00:29:49.352 true 00:29:49.352 16:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:49.352 16:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:49.611 16:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:49.611 16:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:49.611 16:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2932846 00:29:50.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.178 Nvme0n1 : 3.00 22648.33 88.47 0.00 0.00 0.00 0.00 0.00 00:29:50.178 [2024-11-20T15:22:51.015Z] =================================================================================================================== 00:29:50.178 [2024-11-20T15:22:51.015Z] Total : 22648.33 88.47 0.00 0.00 0.00 0.00 0.00 00:29:50.178 00:29:51.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.114 Nvme0n1 : 4.00 22764.75 88.92 0.00 0.00 0.00 0.00 0.00 00:29:51.114 [2024-11-20T15:22:51.951Z] =================================================================================================================== 00:29:51.114 [2024-11-20T15:22:51.951Z] Total : 22764.75 88.92 0.00 0.00 0.00 0.00 0.00 00:29:51.114 00:29:52.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.053 Nvme0n1 : 5.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:29:52.053 [2024-11-20T15:22:52.890Z] =================================================================================================================== 00:29:52.053 [2024-11-20T15:22:52.890Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:29:52.053 00:29:53.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:53.431 Nvme0n1 : 6.00 22908.00 89.48 0.00 0.00 0.00 0.00 0.00 00:29:53.431 [2024-11-20T15:22:54.268Z] =================================================================================================================== 00:29:53.431 [2024-11-20T15:22:54.268Z] Total : 22908.00 89.48 0.00 0.00 0.00 0.00 0.00 00:29:53.431 00:29:54.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.368 Nvme0n1 : 7.00 22955.57 89.67 0.00 0.00 0.00 0.00 0.00 00:29:54.368 [2024-11-20T15:22:55.205Z] =================================================================================================================== 00:29:54.368 [2024-11-20T15:22:55.205Z] Total : 22955.57 89.67 0.00 0.00 0.00 0.00 0.00 00:29:54.368 00:29:55.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.305 Nvme0n1 : 8.00 22991.25 89.81 0.00 0.00 0.00 0.00 0.00 00:29:55.305 [2024-11-20T15:22:56.142Z] =================================================================================================================== 00:29:55.305 [2024-11-20T15:22:56.142Z] Total : 22991.25 89.81 0.00 0.00 0.00 0.00 0.00 00:29:55.305 00:29:56.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.243 Nvme0n1 : 9.00 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:29:56.243 [2024-11-20T15:22:57.080Z] =================================================================================================================== 00:29:56.243 [2024-11-20T15:22:57.080Z] Total : 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:29:56.243 00:29:57.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.181 Nvme0n1 : 10.00 23041.20 90.00 0.00 0.00 0.00 0.00 0.00 00:29:57.181 [2024-11-20T15:22:58.018Z] =================================================================================================================== 00:29:57.181 [2024-11-20T15:22:58.018Z] Total : 23041.20 90.00 0.00 0.00 0.00 0.00 0.00 00:29:57.181 00:29:57.181 
00:29:57.181 Latency(us) 00:29:57.181 [2024-11-20T15:22:58.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.181 Nvme0n1 : 10.00 23046.19 90.02 0.00 0.00 5551.22 3305.29 24960.67 00:29:57.181 [2024-11-20T15:22:58.018Z] =================================================================================================================== 00:29:57.181 [2024-11-20T15:22:58.018Z] Total : 23046.19 90.02 0.00 0.00 5551.22 3305.29 24960.67 00:29:57.181 { 00:29:57.181 "results": [ 00:29:57.181 { 00:29:57.181 "job": "Nvme0n1", 00:29:57.181 "core_mask": "0x2", 00:29:57.181 "workload": "randwrite", 00:29:57.181 "status": "finished", 00:29:57.181 "queue_depth": 128, 00:29:57.181 "io_size": 4096, 00:29:57.181 "runtime": 10.003389, 00:29:57.181 "iops": 23046.189646328858, 00:29:57.181 "mibps": 90.0241783059721, 00:29:57.181 "io_failed": 0, 00:29:57.181 "io_timeout": 0, 00:29:57.181 "avg_latency_us": 5551.216391609869, 00:29:57.181 "min_latency_us": 3305.2939130434784, 00:29:57.181 "max_latency_us": 24960.667826086956 00:29:57.181 } 00:29:57.181 ], 00:29:57.181 "core_count": 1 00:29:57.181 } 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2932626 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2932626 ']' 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2932626 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.181 16:22:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2932626 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2932626' 00:29:57.181 killing process with pid 2932626 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2932626 00:29:57.181 Received shutdown signal, test time was about 10.000000 seconds 00:29:57.181 00:29:57.181 Latency(us) 00:29:57.181 [2024-11-20T15:22:58.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.181 [2024-11-20T15:22:58.018Z] =================================================================================================================== 00:29:57.181 [2024-11-20T15:22:58.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.181 16:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2932626 00:29:57.440 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:57.699 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:57.958 16:22:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2929551 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2929551 00:29:57.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2929551 Killed "${NVMF_APP[@]}" "$@" 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2934470 00:29:57.958 16:22:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2934470 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2934470 ']' 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.958 16:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:58.218 [2024-11-20 16:22:58.820431] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:58.218 [2024-11-20 16:22:58.821423] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:29:58.218 [2024-11-20 16:22:58.821460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.218 [2024-11-20 16:22:58.902851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.218 [2024-11-20 16:22:58.944486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.218 [2024-11-20 16:22:58.944524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.218 [2024-11-20 16:22:58.944531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.218 [2024-11-20 16:22:58.944537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.218 [2024-11-20 16:22:58.944543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.218 [2024-11-20 16:22:58.945135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.218 [2024-11-20 16:22:59.014611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:58.218 [2024-11-20 16:22:59.014844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:58.218 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.218 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:58.218 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.218 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.218 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:58.477 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.477 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:58.477 [2024-11-20 16:22:59.258623] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:58.477 [2024-11-20 16:22:59.258818] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:58.477 [2024-11-20 16:22:59.258898] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bc392b8d-7d48-4299-be78-11a0dbcab577 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=bc392b8d-7d48-4299-be78-11a0dbcab577 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:58.478 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:58.736 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bc392b8d-7d48-4299-be78-11a0dbcab577 -t 2000 00:29:58.995 [ 00:29:58.995 { 00:29:58.995 "name": "bc392b8d-7d48-4299-be78-11a0dbcab577", 00:29:58.995 "aliases": [ 00:29:58.995 "lvs/lvol" 00:29:58.995 ], 00:29:58.995 "product_name": "Logical Volume", 00:29:58.995 "block_size": 4096, 00:29:58.995 "num_blocks": 38912, 00:29:58.995 "uuid": "bc392b8d-7d48-4299-be78-11a0dbcab577", 00:29:58.995 "assigned_rate_limits": { 00:29:58.995 "rw_ios_per_sec": 0, 00:29:58.995 "rw_mbytes_per_sec": 0, 00:29:58.995 "r_mbytes_per_sec": 0, 00:29:58.995 "w_mbytes_per_sec": 0 00:29:58.995 }, 00:29:58.995 "claimed": false, 00:29:58.995 "zoned": false, 00:29:58.995 "supported_io_types": { 00:29:58.995 "read": true, 00:29:58.995 "write": true, 00:29:58.995 "unmap": true, 00:29:58.995 "flush": false, 00:29:58.995 "reset": true, 00:29:58.995 "nvme_admin": false, 00:29:58.995 "nvme_io": false, 00:29:58.995 "nvme_io_md": false, 00:29:58.995 "write_zeroes": true, 
00:29:58.995 "zcopy": false, 00:29:58.995 "get_zone_info": false, 00:29:58.995 "zone_management": false, 00:29:58.995 "zone_append": false, 00:29:58.995 "compare": false, 00:29:58.995 "compare_and_write": false, 00:29:58.995 "abort": false, 00:29:58.995 "seek_hole": true, 00:29:58.995 "seek_data": true, 00:29:58.995 "copy": false, 00:29:58.995 "nvme_iov_md": false 00:29:58.995 }, 00:29:58.995 "driver_specific": { 00:29:58.995 "lvol": { 00:29:58.995 "lvol_store_uuid": "1718a7be-f653-4b0b-af23-b8fccbfba3ae", 00:29:58.995 "base_bdev": "aio_bdev", 00:29:58.995 "thin_provision": false, 00:29:58.995 "num_allocated_clusters": 38, 00:29:58.995 "snapshot": false, 00:29:58.995 "clone": false, 00:29:58.995 "esnap_clone": false 00:29:58.995 } 00:29:58.995 } 00:29:58.995 } 00:29:58.995 ] 00:29:58.995 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:58.995 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:58.995 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:59.254 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:59.254 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:59.254 16:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:59.254 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:59.254 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:59.513 [2024-11-20 16:23:00.245584] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:59.513 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:29:59.772 request: 00:29:59.772 { 00:29:59.772 "uuid": "1718a7be-f653-4b0b-af23-b8fccbfba3ae", 00:29:59.772 "method": "bdev_lvol_get_lvstores", 00:29:59.772 "req_id": 1 00:29:59.772 } 00:29:59.772 Got JSON-RPC error response 00:29:59.772 response: 00:29:59.772 { 00:29:59.772 "code": -19, 00:29:59.772 "message": "No such device" 00:29:59.772 } 00:29:59.772 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:59.772 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:59.772 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:59.772 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:59.772 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:00.031 aio_bdev 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bc392b8d-7d48-4299-be78-11a0dbcab577 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bc392b8d-7d48-4299-be78-11a0dbcab577 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:00.031 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:00.291 16:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bc392b8d-7d48-4299-be78-11a0dbcab577 -t 2000 00:30:00.291 [ 00:30:00.291 { 00:30:00.291 "name": "bc392b8d-7d48-4299-be78-11a0dbcab577", 00:30:00.291 "aliases": [ 00:30:00.291 "lvs/lvol" 00:30:00.291 ], 00:30:00.291 "product_name": "Logical Volume", 00:30:00.291 "block_size": 4096, 00:30:00.291 "num_blocks": 38912, 00:30:00.291 "uuid": "bc392b8d-7d48-4299-be78-11a0dbcab577", 00:30:00.291 "assigned_rate_limits": { 00:30:00.291 "rw_ios_per_sec": 0, 00:30:00.291 "rw_mbytes_per_sec": 0, 00:30:00.291 
"r_mbytes_per_sec": 0, 00:30:00.291 "w_mbytes_per_sec": 0 00:30:00.291 }, 00:30:00.291 "claimed": false, 00:30:00.291 "zoned": false, 00:30:00.291 "supported_io_types": { 00:30:00.291 "read": true, 00:30:00.291 "write": true, 00:30:00.291 "unmap": true, 00:30:00.291 "flush": false, 00:30:00.291 "reset": true, 00:30:00.291 "nvme_admin": false, 00:30:00.291 "nvme_io": false, 00:30:00.291 "nvme_io_md": false, 00:30:00.291 "write_zeroes": true, 00:30:00.291 "zcopy": false, 00:30:00.291 "get_zone_info": false, 00:30:00.291 "zone_management": false, 00:30:00.291 "zone_append": false, 00:30:00.291 "compare": false, 00:30:00.291 "compare_and_write": false, 00:30:00.291 "abort": false, 00:30:00.291 "seek_hole": true, 00:30:00.291 "seek_data": true, 00:30:00.291 "copy": false, 00:30:00.291 "nvme_iov_md": false 00:30:00.291 }, 00:30:00.291 "driver_specific": { 00:30:00.291 "lvol": { 00:30:00.291 "lvol_store_uuid": "1718a7be-f653-4b0b-af23-b8fccbfba3ae", 00:30:00.291 "base_bdev": "aio_bdev", 00:30:00.291 "thin_provision": false, 00:30:00.291 "num_allocated_clusters": 38, 00:30:00.291 "snapshot": false, 00:30:00.291 "clone": false, 00:30:00.291 "esnap_clone": false 00:30:00.291 } 00:30:00.291 } 00:30:00.291 } 00:30:00.291 ] 00:30:00.291 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:00.291 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:30:00.291 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:00.550 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:00.550 16:23:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:30:00.550 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:00.809 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:00.809 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bc392b8d-7d48-4299-be78-11a0dbcab577 00:30:01.068 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1718a7be-f653-4b0b-af23-b8fccbfba3ae 00:30:01.068 16:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:01.328 00:30:01.328 real 0m17.053s 00:30:01.328 user 0m34.607s 00:30:01.328 sys 0m3.718s 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:01.328 ************************************ 00:30:01.328 END TEST lvs_grow_dirty 00:30:01.328 ************************************ 
00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:01.328 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:01.587 nvmf_trace.0 00:30:01.587 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:01.587 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:01.587 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.587 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:01.587 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.587 16:23:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:01.587 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.588 rmmod nvme_tcp 00:30:01.588 rmmod nvme_fabrics 00:30:01.588 rmmod nvme_keyring 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2934470 ']' 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2934470 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2934470 ']' 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2934470 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2934470 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.588 
16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2934470' 00:30:01.588 killing process with pid 2934470 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2934470 00:30:01.588 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2934470 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.847 16:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.755 
16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.755 00:30:03.755 real 0m42.030s 00:30:03.755 user 0m52.522s 00:30:03.755 sys 0m10.055s 00:30:03.755 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.755 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.755 ************************************ 00:30:03.755 END TEST nvmf_lvs_grow 00:30:03.755 ************************************ 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:04.015 ************************************ 00:30:04.015 START TEST nvmf_bdev_io_wait 00:30:04.015 ************************************ 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:04.015 * Looking for test storage... 
00:30:04.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:04.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.015 --rc genhtml_branch_coverage=1 00:30:04.015 --rc genhtml_function_coverage=1 00:30:04.015 --rc genhtml_legend=1 00:30:04.015 --rc geninfo_all_blocks=1 00:30:04.015 --rc geninfo_unexecuted_blocks=1 00:30:04.015 00:30:04.015 ' 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:04.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.015 --rc genhtml_branch_coverage=1 00:30:04.015 --rc genhtml_function_coverage=1 00:30:04.015 --rc genhtml_legend=1 00:30:04.015 --rc geninfo_all_blocks=1 00:30:04.015 --rc geninfo_unexecuted_blocks=1 00:30:04.015 00:30:04.015 ' 00:30:04.015 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:04.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.015 --rc genhtml_branch_coverage=1 00:30:04.015 --rc genhtml_function_coverage=1 00:30:04.016 --rc genhtml_legend=1 00:30:04.016 --rc geninfo_all_blocks=1 00:30:04.016 --rc geninfo_unexecuted_blocks=1 00:30:04.016 00:30:04.016 ' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.016 --rc genhtml_branch_coverage=1 00:30:04.016 --rc genhtml_function_coverage=1 
00:30:04.016 --rc genhtml_legend=1 00:30:04.016 --rc geninfo_all_blocks=1 00:30:04.016 --rc geninfo_unexecuted_blocks=1 00:30:04.016 00:30:04.016 ' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.016 16:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.016 16:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.016 16:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.016 16:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.016 16:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:10.589 16:23:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:10.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:10.589 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:10.589 Found net devices under 0000:86:00.0: cvl_0_0 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:10.589 Found net devices under 0000:86:00.1: cvl_0_1 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:10.589 16:23:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.589 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:10.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:10.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:30:10.590 00:30:10.590 --- 10.0.0.2 ping statistics --- 00:30:10.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.590 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:10.590 00:30:10.590 --- 10.0.0.1 ping statistics --- 00:30:10.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.590 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:10.590 16:23:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2938597 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2938597 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2938597 ']' 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.590 16:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.590 [2024-11-20 16:23:10.804151] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:10.590 [2024-11-20 16:23:10.805139] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:30:10.590 [2024-11-20 16:23:10.805180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.590 [2024-11-20 16:23:10.889436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.590 [2024-11-20 16:23:10.935786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.590 [2024-11-20 16:23:10.935824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.590 [2024-11-20 16:23:10.935831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.590 [2024-11-20 16:23:10.935837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.590 [2024-11-20 16:23:10.935845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:10.590 [2024-11-20 16:23:10.937293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.590 [2024-11-20 16:23:10.937322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.590 [2024-11-20 16:23:10.937427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.590 [2024-11-20 16:23:10.937428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.590 [2024-11-20 16:23:10.937880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:10.849 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.849 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:10.849 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:10.849 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:10.849 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 [2024-11-20 16:23:11.759173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:11.110 [2024-11-20 16:23:11.759777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:11.110 [2024-11-20 16:23:11.759785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:11.110 [2024-11-20 16:23:11.759943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 [2024-11-20 16:23:11.770268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 Malloc0 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.110 [2024-11-20 16:23:11.838371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2938764 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2938766 00:30:11.110 16:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.110 { 00:30:11.110 "params": { 00:30:11.110 "name": "Nvme$subsystem", 00:30:11.110 "trtype": "$TEST_TRANSPORT", 00:30:11.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.110 "adrfam": "ipv4", 00:30:11.110 "trsvcid": "$NVMF_PORT", 00:30:11.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.110 "hdgst": ${hdgst:-false}, 00:30:11.110 "ddgst": ${ddgst:-false} 00:30:11.110 }, 00:30:11.110 "method": "bdev_nvme_attach_controller" 00:30:11.110 } 00:30:11.110 EOF 00:30:11.110 )") 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2938768 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.110 16:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2938771 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:11.110 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.110 { 00:30:11.110 "params": { 00:30:11.110 "name": "Nvme$subsystem", 00:30:11.110 "trtype": "$TEST_TRANSPORT", 00:30:11.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.110 "adrfam": "ipv4", 00:30:11.110 "trsvcid": "$NVMF_PORT", 00:30:11.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.110 "hdgst": ${hdgst:-false}, 00:30:11.111 "ddgst": ${ddgst:-false} 00:30:11.111 }, 00:30:11.111 "method": "bdev_nvme_attach_controller" 00:30:11.111 } 00:30:11.111 EOF 00:30:11.111 )") 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.111 { 00:30:11.111 "params": { 00:30:11.111 "name": "Nvme$subsystem", 00:30:11.111 "trtype": "$TEST_TRANSPORT", 00:30:11.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.111 "adrfam": "ipv4", 00:30:11.111 "trsvcid": "$NVMF_PORT", 00:30:11.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.111 "hdgst": ${hdgst:-false}, 00:30:11.111 "ddgst": ${ddgst:-false} 00:30:11.111 }, 00:30:11.111 "method": "bdev_nvme_attach_controller" 00:30:11.111 } 00:30:11.111 EOF 00:30:11.111 )") 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.111 { 00:30:11.111 "params": { 00:30:11.111 "name": "Nvme$subsystem", 00:30:11.111 "trtype": "$TEST_TRANSPORT", 00:30:11.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.111 "adrfam": "ipv4", 00:30:11.111 "trsvcid": "$NVMF_PORT", 00:30:11.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.111 "hdgst": ${hdgst:-false}, 00:30:11.111 "ddgst": ${ddgst:-false} 00:30:11.111 }, 00:30:11.111 "method": 
"bdev_nvme_attach_controller" 00:30:11.111 } 00:30:11.111 EOF 00:30:11.111 )") 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2938764 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.111 "params": { 00:30:11.111 "name": "Nvme1", 00:30:11.111 "trtype": "tcp", 00:30:11.111 "traddr": "10.0.0.2", 00:30:11.111 "adrfam": "ipv4", 00:30:11.111 "trsvcid": "4420", 00:30:11.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.111 "hdgst": false, 00:30:11.111 "ddgst": false 00:30:11.111 }, 00:30:11.111 "method": "bdev_nvme_attach_controller" 00:30:11.111 }' 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.111 "params": { 00:30:11.111 "name": "Nvme1", 00:30:11.111 "trtype": "tcp", 00:30:11.111 "traddr": "10.0.0.2", 00:30:11.111 "adrfam": "ipv4", 00:30:11.111 "trsvcid": "4420", 00:30:11.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.111 "hdgst": false, 00:30:11.111 "ddgst": false 00:30:11.111 }, 00:30:11.111 "method": "bdev_nvme_attach_controller" 00:30:11.111 }' 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.111 "params": { 00:30:11.111 "name": "Nvme1", 00:30:11.111 "trtype": "tcp", 00:30:11.111 "traddr": "10.0.0.2", 00:30:11.111 "adrfam": "ipv4", 00:30:11.111 "trsvcid": "4420", 00:30:11.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.111 "hdgst": false, 00:30:11.111 "ddgst": false 00:30:11.111 }, 00:30:11.111 "method": "bdev_nvme_attach_controller" 00:30:11.111 }' 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:11.111 16:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.111 "params": { 00:30:11.111 "name": "Nvme1", 00:30:11.111 "trtype": "tcp", 00:30:11.111 "traddr": "10.0.0.2", 00:30:11.111 "adrfam": "ipv4", 00:30:11.111 "trsvcid": "4420", 00:30:11.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.111 "hdgst": false, 00:30:11.111 "ddgst": false 00:30:11.111 }, 00:30:11.111 "method": "bdev_nvme_attach_controller" 
00:30:11.111 }' 00:30:11.111 [2024-11-20 16:23:11.889587] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:30:11.111 [2024-11-20 16:23:11.889632] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:11.111 [2024-11-20 16:23:11.891016] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:30:11.111 [2024-11-20 16:23:11.891070] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:11.111 [2024-11-20 16:23:11.892916] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:30:11.111 [2024-11-20 16:23:11.892965] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:11.111 [2024-11-20 16:23:11.896307] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:30:11.111 [2024-11-20 16:23:11.896347] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:11.370 [2024-11-20 16:23:12.088154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.370 [2024-11-20 16:23:12.131169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.370 [2024-11-20 16:23:12.184981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.628 [2024-11-20 16:23:12.238831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:11.628 [2024-11-20 16:23:12.239611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.628 [2024-11-20 16:23:12.283286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:11.628 [2024-11-20 16:23:12.297170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.628 [2024-11-20 16:23:12.340132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:11.628 Running I/O for 1 seconds... 00:30:11.628 Running I/O for 1 seconds... 00:30:11.628 Running I/O for 1 seconds... 00:30:11.885 Running I/O for 1 seconds... 
00:30:12.818 7470.00 IOPS, 29.18 MiB/s 00:30:12.818 Latency(us) 00:30:12.818 [2024-11-20T15:23:13.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.818 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:12.818 Nvme1n1 : 1.02 7480.95 29.22 0.00 0.00 16986.29 1503.05 23023.08 00:30:12.818 [2024-11-20T15:23:13.655Z] =================================================================================================================== 00:30:12.818 [2024-11-20T15:23:13.655Z] Total : 7480.95 29.22 0.00 0.00 16986.29 1503.05 23023.08 00:30:12.818 7142.00 IOPS, 27.90 MiB/s [2024-11-20T15:23:13.655Z] 237832.00 IOPS, 929.03 MiB/s 00:30:12.818 Latency(us) 00:30:12.818 [2024-11-20T15:23:13.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.818 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:12.818 Nvme1n1 : 1.00 237445.60 927.52 0.00 0.00 535.76 229.73 1624.15 00:30:12.818 [2024-11-20T15:23:13.655Z] =================================================================================================================== 00:30:12.818 [2024-11-20T15:23:13.655Z] Total : 237445.60 927.52 0.00 0.00 535.76 229.73 1624.15 00:30:12.818 00:30:12.818 Latency(us) 00:30:12.818 [2024-11-20T15:23:13.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.818 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:12.818 Nvme1n1 : 1.05 6951.72 27.16 0.00 0.00 17662.73 4843.97 45362.31 00:30:12.818 [2024-11-20T15:23:13.655Z] =================================================================================================================== 00:30:12.818 [2024-11-20T15:23:13.655Z] Total : 6951.72 27.16 0.00 0.00 17662.73 4843.97 45362.31 00:30:12.818 12865.00 IOPS, 50.25 MiB/s 00:30:12.818 Latency(us) 00:30:12.818 [2024-11-20T15:23:13.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.818 
Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:12.818 Nvme1n1 : 1.00 12950.36 50.59 0.00 0.00 9862.52 1980.33 14531.90 00:30:12.818 [2024-11-20T15:23:13.655Z] =================================================================================================================== 00:30:12.818 [2024-11-20T15:23:13.655Z] Total : 12950.36 50.59 0.00 0.00 9862.52 1980.33 14531.90 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2938766 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2938768 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2938771 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.818 
16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.818 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.077 rmmod nvme_tcp 00:30:13.077 rmmod nvme_fabrics 00:30:13.077 rmmod nvme_keyring 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2938597 ']' 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2938597 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2938597 ']' 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2938597 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938597 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938597' 00:30:13.077 killing process with pid 2938597 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2938597 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2938597 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.077 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.078 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:13.078 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:13.078 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.078 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.337 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.337 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.337 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.337 16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.337 
16:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.244 16:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.244 00:30:15.244 real 0m11.353s 00:30:15.244 user 0m14.678s 00:30:15.244 sys 0m6.541s 00:30:15.244 16:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.244 16:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.244 ************************************ 00:30:15.244 END TEST nvmf_bdev_io_wait 00:30:15.244 ************************************ 00:30:15.244 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:15.244 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:15.244 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.244 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:15.244 ************************************ 00:30:15.244 START TEST nvmf_queue_depth 00:30:15.244 ************************************ 00:30:15.244 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:15.504 * Looking for test storage... 
00:30:15.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:15.504 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.505 --rc genhtml_branch_coverage=1 00:30:15.505 --rc genhtml_function_coverage=1 00:30:15.505 --rc genhtml_legend=1 00:30:15.505 --rc geninfo_all_blocks=1 00:30:15.505 --rc geninfo_unexecuted_blocks=1 00:30:15.505 00:30:15.505 ' 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.505 --rc genhtml_branch_coverage=1 00:30:15.505 --rc genhtml_function_coverage=1 00:30:15.505 --rc genhtml_legend=1 00:30:15.505 --rc geninfo_all_blocks=1 00:30:15.505 --rc geninfo_unexecuted_blocks=1 00:30:15.505 00:30:15.505 ' 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.505 --rc genhtml_branch_coverage=1 00:30:15.505 --rc genhtml_function_coverage=1 00:30:15.505 --rc genhtml_legend=1 00:30:15.505 --rc geninfo_all_blocks=1 00:30:15.505 --rc geninfo_unexecuted_blocks=1 00:30:15.505 00:30:15.505 ' 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.505 --rc genhtml_branch_coverage=1 00:30:15.505 --rc genhtml_function_coverage=1 00:30:15.505 --rc genhtml_legend=1 00:30:15.505 --rc 
geninfo_all_blocks=1 00:30:15.505 --rc geninfo_unexecuted_blocks=1 00:30:15.505 00:30:15.505 ' 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.505 16:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:15.505 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.506 16:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.506 16:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.506 16:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.074 
16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:22.074 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.074 16:23:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:22.074 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.074 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:22.075 Found net devices under 0000:86:00.0: cvl_0_0 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:22.075 Found net devices under 0000:86:00.1: cvl_0_1 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.075 16:23:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.075 16:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:22.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:30:22.075 00:30:22.075 --- 10.0.0.2 ping statistics --- 00:30:22.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.075 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:22.075 00:30:22.075 --- 10.0.0.1 ping statistics --- 00:30:22.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.075 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:22.075 16:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2942549 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2942549 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2942549 ']' 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.075 [2024-11-20 16:23:22.139482] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:22.075 [2024-11-20 16:23:22.140420] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:30:22.075 [2024-11-20 16:23:22.140456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.075 [2024-11-20 16:23:22.223332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.075 [2024-11-20 16:23:22.264345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.075 [2024-11-20 16:23:22.264380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.075 [2024-11-20 16:23:22.264387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.075 [2024-11-20 16:23:22.264393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.075 [2024-11-20 16:23:22.264398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.075 [2024-11-20 16:23:22.264939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.075 [2024-11-20 16:23:22.332261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.075 [2024-11-20 16:23:22.332487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:22.075 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 [2024-11-20 16:23:22.397592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 Malloc0 00:30:22.076 16:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 [2024-11-20 16:23:22.473576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.076 
16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2942664 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2942664 /var/tmp/bdevperf.sock 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2942664 ']' 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:22.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.076 [2024-11-20 16:23:22.524167] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:30:22.076 [2024-11-20 16:23:22.524211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942664 ] 00:30:22.076 [2024-11-20 16:23:22.598589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.076 [2024-11-20 16:23:22.642382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.076 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.335 NVMe0n1 00:30:22.335 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.335 16:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:22.335 Running I/O for 10 seconds... 
00:30:24.648 11264.00 IOPS, 44.00 MiB/s [2024-11-20T15:23:26.421Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-20T15:23:27.357Z] 11950.67 IOPS, 46.68 MiB/s [2024-11-20T15:23:28.309Z] 12038.50 IOPS, 47.03 MiB/s [2024-11-20T15:23:29.245Z] 12138.80 IOPS, 47.42 MiB/s [2024-11-20T15:23:30.179Z] 12227.83 IOPS, 47.76 MiB/s [2024-11-20T15:23:31.116Z] 12265.29 IOPS, 47.91 MiB/s [2024-11-20T15:23:32.162Z] 12233.00 IOPS, 47.79 MiB/s [2024-11-20T15:23:33.098Z] 12253.00 IOPS, 47.86 MiB/s [2024-11-20T15:23:33.357Z] 12267.80 IOPS, 47.92 MiB/s 00:30:32.520 Latency(us) 00:30:32.520 [2024-11-20T15:23:33.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:32.520 Verification LBA range: start 0x0 length 0x4000 00:30:32.520 NVMe0n1 : 10.07 12283.68 47.98 0.00 0.00 83076.48 19603.81 56531.92 00:30:32.520 [2024-11-20T15:23:33.357Z] =================================================================================================================== 00:30:32.520 [2024-11-20T15:23:33.357Z] Total : 12283.68 47.98 0.00 0.00 83076.48 19603.81 56531.92 00:30:32.520 { 00:30:32.520 "results": [ 00:30:32.520 { 00:30:32.520 "job": "NVMe0n1", 00:30:32.520 "core_mask": "0x1", 00:30:32.520 "workload": "verify", 00:30:32.520 "status": "finished", 00:30:32.520 "verify_range": { 00:30:32.520 "start": 0, 00:30:32.520 "length": 16384 00:30:32.520 }, 00:30:32.520 "queue_depth": 1024, 00:30:32.520 "io_size": 4096, 00:30:32.520 "runtime": 10.066449, 00:30:32.520 "iops": 12283.676200018497, 00:30:32.520 "mibps": 47.983110156322255, 00:30:32.520 "io_failed": 0, 00:30:32.520 "io_timeout": 0, 00:30:32.520 "avg_latency_us": 83076.47984225141, 00:30:32.520 "min_latency_us": 19603.812173913044, 00:30:32.520 "max_latency_us": 56531.92347826087 00:30:32.520 } 00:30:32.520 ], 00:30:32.520 "core_count": 1 00:30:32.520 } 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2942664 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2942664 ']' 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2942664 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942664 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942664' 00:30:32.520 killing process with pid 2942664 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2942664 00:30:32.520 Received shutdown signal, test time was about 10.000000 seconds 00:30:32.520 00:30:32.520 Latency(us) 00:30:32.520 [2024-11-20T15:23:33.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.520 [2024-11-20T15:23:33.357Z] =================================================================================================================== 00:30:32.520 [2024-11-20T15:23:33.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.520 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2942664 00:30:32.779 16:23:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:32.779 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:32.779 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:32.779 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:32.779 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.779 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.780 rmmod nvme_tcp 00:30:32.780 rmmod nvme_fabrics 00:30:32.780 rmmod nvme_keyring 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2942549 ']' 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2942549 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2942549 ']' 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2942549 00:30:32.780 16:23:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942549 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942549' 00:30:32.780 killing process with pid 2942549 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2942549 00:30:32.780 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2942549 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.039 16:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.945 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:34.945 00:30:34.945 real 0m19.690s 00:30:34.945 user 0m22.871s 00:30:34.945 sys 0m6.239s 00:30:34.945 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.945 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:34.945 ************************************ 00:30:34.945 END TEST nvmf_queue_depth 00:30:34.945 ************************************ 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:35.204 ************************************ 00:30:35.204 START 
TEST nvmf_target_multipath 00:30:35.204 ************************************ 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:35.204 * Looking for test storage... 00:30:35.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.204 16:23:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:35.204 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.205 --rc genhtml_branch_coverage=1 00:30:35.205 --rc genhtml_function_coverage=1 00:30:35.205 --rc genhtml_legend=1 00:30:35.205 --rc geninfo_all_blocks=1 00:30:35.205 --rc geninfo_unexecuted_blocks=1 00:30:35.205 00:30:35.205 ' 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.205 --rc genhtml_branch_coverage=1 00:30:35.205 --rc genhtml_function_coverage=1 00:30:35.205 --rc genhtml_legend=1 00:30:35.205 --rc geninfo_all_blocks=1 00:30:35.205 --rc geninfo_unexecuted_blocks=1 00:30:35.205 00:30:35.205 ' 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.205 --rc genhtml_branch_coverage=1 00:30:35.205 --rc genhtml_function_coverage=1 00:30:35.205 --rc genhtml_legend=1 00:30:35.205 --rc geninfo_all_blocks=1 00:30:35.205 --rc geninfo_unexecuted_blocks=1 00:30:35.205 00:30:35.205 ' 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.205 --rc genhtml_branch_coverage=1 00:30:35.205 --rc genhtml_function_coverage=1 00:30:35.205 --rc genhtml_legend=1 00:30:35.205 --rc geninfo_all_blocks=1 00:30:35.205 --rc geninfo_unexecuted_blocks=1 00:30:35.205 00:30:35.205 ' 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.205 16:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.205 16:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.205 16:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.205 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.206 16:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.776 16:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.776 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:41.777 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:41.777 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:41.777 Found net devices under 0000:86:00.0: cvl_0_0 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.777 16:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:41.777 Found net devices under 0000:86:00.1: cvl_0_1 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.777 16:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.777 16:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:30:41.777 00:30:41.777 --- 10.0.0.2 ping statistics --- 00:30:41.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.777 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:41.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:30:41.777 00:30:41.777 --- 10.0.0.1 ping statistics --- 00:30:41.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.777 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:41.777 only one NIC for nvmf test 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:41.777 16:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.777 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.778 rmmod nvme_tcp 00:30:41.778 rmmod nvme_fabrics 00:30:41.778 rmmod nvme_keyring 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:41.778 16:23:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.778 16:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.778 16:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.778 16:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.778 16:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.778 16:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.778 16:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.684 
16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.684 00:30:43.684 real 0m8.299s 00:30:43.684 user 0m1.815s 00:30:43.684 sys 0m4.509s 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:43.684 ************************************ 00:30:43.684 END TEST nvmf_target_multipath 00:30:43.684 ************************************ 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:43.684 ************************************ 00:30:43.684 START TEST nvmf_zcopy 00:30:43.684 ************************************ 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:43.684 * Looking for test storage... 
00:30:43.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.684 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:43.685 16:23:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.685 --rc genhtml_branch_coverage=1 00:30:43.685 --rc genhtml_function_coverage=1 00:30:43.685 --rc genhtml_legend=1 00:30:43.685 --rc geninfo_all_blocks=1 00:30:43.685 --rc geninfo_unexecuted_blocks=1 00:30:43.685 00:30:43.685 ' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.685 --rc genhtml_branch_coverage=1 00:30:43.685 --rc genhtml_function_coverage=1 00:30:43.685 --rc genhtml_legend=1 00:30:43.685 --rc geninfo_all_blocks=1 00:30:43.685 --rc geninfo_unexecuted_blocks=1 00:30:43.685 00:30:43.685 ' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.685 --rc genhtml_branch_coverage=1 00:30:43.685 --rc genhtml_function_coverage=1 00:30:43.685 --rc genhtml_legend=1 00:30:43.685 --rc geninfo_all_blocks=1 00:30:43.685 --rc geninfo_unexecuted_blocks=1 00:30:43.685 00:30:43.685 ' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.685 --rc genhtml_branch_coverage=1 00:30:43.685 --rc genhtml_function_coverage=1 00:30:43.685 --rc genhtml_legend=1 00:30:43.685 --rc geninfo_all_blocks=1 00:30:43.685 --rc geninfo_unexecuted_blocks=1 00:30:43.685 00:30:43.685 ' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.685 16:23:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.685 16:23:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.685 16:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.251 
16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.251 16:23:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:50.251 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:50.251 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.251 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:50.252 Found net devices under 0000:86:00.0: cvl_0_0 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:50.252 Found net devices under 0000:86:00.1: cvl_0_1 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.252 16:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.252 16:23:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:30:50.252 00:30:50.252 --- 10.0.0.2 ping statistics --- 00:30:50.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.252 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:30:50.252 00:30:50.252 --- 10.0.0.1 ping statistics --- 00:30:50.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.252 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2951315 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2951315 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2951315 ']' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.252 [2024-11-20 16:23:50.271155] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.252 [2024-11-20 16:23:50.272142] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:30:50.252 [2024-11-20 16:23:50.272177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.252 [2024-11-20 16:23:50.354222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.252 [2024-11-20 16:23:50.393920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.252 [2024-11-20 16:23:50.393962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.252 [2024-11-20 16:23:50.393971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.252 [2024-11-20 16:23:50.393978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.252 [2024-11-20 16:23:50.393983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.252 [2024-11-20 16:23:50.394539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.252 [2024-11-20 16:23:50.463868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.252 [2024-11-20 16:23:50.464115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.252 [2024-11-20 16:23:50.539298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.252 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.253 
16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.253 [2024-11-20 16:23:50.567535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.253 malloc0 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:50.253 { 00:30:50.253 "params": { 00:30:50.253 "name": "Nvme$subsystem", 00:30:50.253 "trtype": "$TEST_TRANSPORT", 00:30:50.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.253 "adrfam": "ipv4", 00:30:50.253 "trsvcid": "$NVMF_PORT", 00:30:50.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.253 "hdgst": ${hdgst:-false}, 00:30:50.253 "ddgst": ${ddgst:-false} 00:30:50.253 }, 00:30:50.253 "method": "bdev_nvme_attach_controller" 00:30:50.253 } 00:30:50.253 EOF 00:30:50.253 )") 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:50.253 16:23:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:50.253 16:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:50.253 "params": { 00:30:50.253 "name": "Nvme1", 00:30:50.253 "trtype": "tcp", 00:30:50.253 "traddr": "10.0.0.2", 00:30:50.253 "adrfam": "ipv4", 00:30:50.253 "trsvcid": "4420", 00:30:50.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.253 "hdgst": false, 00:30:50.253 "ddgst": false 00:30:50.253 }, 00:30:50.253 "method": "bdev_nvme_attach_controller" 00:30:50.253 }' 00:30:50.253 [2024-11-20 16:23:50.663473] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:30:50.253 [2024-11-20 16:23:50.663526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951462 ] 00:30:50.253 [2024-11-20 16:23:50.743069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.253 [2024-11-20 16:23:50.784697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.253 Running I/O for 10 seconds... 
00:30:52.569 8290.00 IOPS, 64.77 MiB/s [2024-11-20T15:23:54.344Z] 8330.50 IOPS, 65.08 MiB/s [2024-11-20T15:23:55.281Z] 8355.00 IOPS, 65.27 MiB/s [2024-11-20T15:23:56.218Z] 8373.50 IOPS, 65.42 MiB/s [2024-11-20T15:23:57.156Z] 8368.80 IOPS, 65.38 MiB/s [2024-11-20T15:23:58.093Z] 8377.00 IOPS, 65.45 MiB/s [2024-11-20T15:23:59.471Z] 8383.71 IOPS, 65.50 MiB/s [2024-11-20T15:24:00.407Z] 8388.00 IOPS, 65.53 MiB/s [2024-11-20T15:24:01.345Z] 8390.67 IOPS, 65.55 MiB/s [2024-11-20T15:24:01.345Z] 8380.90 IOPS, 65.48 MiB/s 00:31:00.508 Latency(us) 00:31:00.508 [2024-11-20T15:24:01.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.508 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:00.508 Verification LBA range: start 0x0 length 0x1000 00:31:00.508 Nvme1n1 : 10.01 8382.96 65.49 0.00 0.00 15225.58 1239.49 21883.33 00:31:00.508 [2024-11-20T15:24:01.345Z] =================================================================================================================== 00:31:00.508 [2024-11-20T15:24:01.345Z] Total : 8382.96 65.49 0.00 0.00 15225.58 1239.49 21883.33 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2953139 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:00.508 16:24:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:00.508 { 00:31:00.508 "params": { 00:31:00.508 "name": "Nvme$subsystem", 00:31:00.508 "trtype": "$TEST_TRANSPORT", 00:31:00.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.508 "adrfam": "ipv4", 00:31:00.508 "trsvcid": "$NVMF_PORT", 00:31:00.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.508 "hdgst": ${hdgst:-false}, 00:31:00.508 "ddgst": ${ddgst:-false} 00:31:00.508 }, 00:31:00.508 "method": "bdev_nvme_attach_controller" 00:31:00.508 } 00:31:00.508 EOF 00:31:00.508 )") 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:00.508 [2024-11-20 16:24:01.222891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.222924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:00.508 16:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:00.508 "params": { 00:31:00.508 "name": "Nvme1", 00:31:00.508 "trtype": "tcp", 00:31:00.508 "traddr": "10.0.0.2", 00:31:00.508 "adrfam": "ipv4", 00:31:00.508 "trsvcid": "4420", 00:31:00.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:00.508 "hdgst": false, 00:31:00.508 "ddgst": false 00:31:00.508 }, 00:31:00.508 "method": "bdev_nvme_attach_controller" 00:31:00.508 }' 00:31:00.508 [2024-11-20 16:24:01.234848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.234861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.246845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.246856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.258844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.258855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.261542] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:31:00.508 [2024-11-20 16:24:01.261586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953139 ] 00:31:00.508 [2024-11-20 16:24:01.270844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.270855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.282842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.282853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.294845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.294856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.306843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.306852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.318843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.318853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.330844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.508 [2024-11-20 16:24:01.330853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.508 [2024-11-20 16:24:01.338267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.768 [2024-11-20 16:24:01.342855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:00.768 [2024-11-20 16:24:01.342877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.354848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.354863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.366845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.366857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.378845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.378857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.380611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.768 [2024-11-20 16:24:01.390852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.390865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.402853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.402873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.414846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.414860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.426844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.426856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.438862] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.438882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.450844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.450855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.462851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.462869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.474856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.474875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.486850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.486866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.498851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.498866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.510850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.510864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.522842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.522852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.534844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.534853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.546851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.546866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.558843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.558858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.570842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.570852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.582841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.582850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.768 [2024-11-20 16:24:01.594847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.768 [2024-11-20 16:24:01.594861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.606848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.606862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.618844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.618855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.630843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 
[2024-11-20 16:24:01.630854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.642848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.642868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 Running I/O for 5 seconds... 00:31:01.027 [2024-11-20 16:24:01.658722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.658742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.673521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.673541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.688848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.688869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.704054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.704075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.719121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.719141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.729898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.729916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.744740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 
16:24:01.744760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.760042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.760062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.775247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.775267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.790977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.790996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.804087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.804107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.819393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.819414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.835913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.835932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.027 [2024-11-20 16:24:01.850840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.027 [2024-11-20 16:24:01.850861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.861746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.861767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.876483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.876502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.891735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.891756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.907163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.907184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.920409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.920430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.935689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.935709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.950957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.950977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.965277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.965298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:01.980304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.980331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 
[2024-11-20 16:24:01.995310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:01.995330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.010832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.010851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.022343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.022363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.036801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.036822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.051802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.051822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.062508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.062528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.076889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.076910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.091856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.091876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.106566] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.106586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.288 [2024-11-20 16:24:02.120697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.288 [2024-11-20 16:24:02.120716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.136188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.136207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.151155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.151174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.166824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.166844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.178242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.178261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.192412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.192430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.207478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.207497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.222825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.222844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.236613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.236632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.252002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.252025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.266923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.266942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.277735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.277754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.292791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.292811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.307785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.547 [2024-11-20 16:24:02.307805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.547 [2024-11-20 16:24:02.322527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.548 [2024-11-20 16:24:02.322546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.548 [2024-11-20 16:24:02.336207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.548 
[2024-11-20 16:24:02.336226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.548 [2024-11-20 16:24:02.347474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.548 [2024-11-20 16:24:02.347492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.548 [2024-11-20 16:24:02.362694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.548 [2024-11-20 16:24:02.362713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.548 [2024-11-20 16:24:02.374056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.548 [2024-11-20 16:24:02.374086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.388904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.388924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.403936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.403962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.418384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.418402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.432017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.432037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.447108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.447128] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.457904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.457923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.472899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.472917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.487765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.487785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.503167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.503186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.517111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.517134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.532245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.532264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.547484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.547503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.562428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.562448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:01.807 [2024-11-20 16:24:02.573984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.574004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.589004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.589023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.604207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.604226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.619627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.619646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.807 [2024-11-20 16:24:02.634674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.807 [2024-11-20 16:24:02.634693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.066 [2024-11-20 16:24:02.649278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.066 [2024-11-20 16:24:02.649298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 16300.00 IOPS, 127.34 MiB/s [2024-11-20T15:24:02.904Z] [2024-11-20 16:24:02.664224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.664244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.678990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.679015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:02.067 [2024-11-20 16:24:02.690047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.690067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.704317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.704336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.719414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.719432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.735257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.735276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.750641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.750660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.762249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.762268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.776621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.776640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.791808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.791827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.807420] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.807439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.818869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.818888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.832582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.832601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.847894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.847913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.862934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.862959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.874852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.874871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.067 [2024-11-20 16:24:02.889399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.067 [2024-11-20 16:24:02.889418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.326 [2024-11-20 16:24:02.904646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.326 [2024-11-20 16:24:02.904665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.326 [2024-11-20 16:24:02.920298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:02.326 [2024-11-20 16:24:02.920318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.326
[... the two-line error pair above (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats continuously, roughly every 11-16 ms, from [2024-11-20 16:24:02.935564] through [2024-11-20 16:24:05.247678]; the interleaved fio progress readings are preserved below ...]
16316.00 IOPS, 127.47 MiB/s [2024-11-20T15:24:03.683Z]
16337.67 IOPS, 127.64 MiB/s [2024-11-20T15:24:04.721Z]
[2024-11-20 16:24:05.247660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 16:24:05.247678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:31:04.434 [2024-11-20 16:24:05.262869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.434 [2024-11-20 16:24:05.262888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.277315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.277333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.292233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.292251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.307011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.307034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.318584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.318603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.333092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.333110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.348542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.348561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.363330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.363348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.378335] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.378355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.693 [2024-11-20 16:24:05.389773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.693 [2024-11-20 16:24:05.389792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.404403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.404422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.419938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.419963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.434545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.434564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.445042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.445061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.460147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.460167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.475373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.475401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.487254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.487274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.500218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.500237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.694 [2024-11-20 16:24:05.515444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.694 [2024-11-20 16:24:05.515462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.531298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.531317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.543221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.543240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.556871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.556890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.571841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.571859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.586864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.586884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.599501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 
[2024-11-20 16:24:05.599521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.611229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.611248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.624897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.624916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.640221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.640241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.654945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.654970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 16329.75 IOPS, 127.58 MiB/s [2024-11-20T15:24:05.790Z] [2024-11-20 16:24:05.669109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.669127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.684079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.684099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.699530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.699549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.714908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 
[2024-11-20 16:24:05.714927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.729167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.729186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.744501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.744525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.759501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.759519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.770852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.770871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.953 [2024-11-20 16:24:05.784582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.953 [2024-11-20 16:24:05.784601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.799595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.799614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.815833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.815852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.831157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.831175] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.847012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.847032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.859968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.859987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.875191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.875210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.887912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.887931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.903003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.903022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.213 [2024-11-20 16:24:05.914462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.213 [2024-11-20 16:24:05.914482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:05.929256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:05.929277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:05.944503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:05.944523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:05.214 [2024-11-20 16:24:05.959470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:05.959490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:05.974914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:05.974933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:05.989048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:05.989069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:06.004488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:06.004510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:06.019825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:06.019845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:06.035020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:06.035041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.214 [2024-11-20 16:24:06.046976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.214 [2024-11-20 16:24:06.046996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.060701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.060722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.076092] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.076111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.091032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.091051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.103113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.103131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.116868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.116889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.132131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.132151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.147401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.147420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.162874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.162893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.474 [2024-11-20 16:24:06.176775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.474 [2024-11-20 16:24:06.176795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.192083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.192103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.208158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.208179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.223070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.223090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.234556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.234576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.249134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.249153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.264254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.264274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.279669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.279689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.475 [2024-11-20 16:24:06.295195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.475 [2024-11-20 16:24:06.295215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.311309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 
[2024-11-20 16:24:06.311328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.323037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.323056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.336512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.336532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.351483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.351503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.366705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.366724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.379903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.379923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.395501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.395520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.407174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.407193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.420939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.420966] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.436307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.436327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.451306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.451325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.462924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.462943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.476895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.476915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.492241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.492261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.507330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.507349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.518952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.518972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.532493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.532512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:05.735 [2024-11-20 16:24:06.547654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.547673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.735 [2024-11-20 16:24:06.562957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.735 [2024-11-20 16:24:06.562977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.573837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.573856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.588872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.588891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.604071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.604090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.618769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.618789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.633251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.633271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.648398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.648417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.663675] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.994 [2024-11-20 16:24:06.663694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:05.994 16309.80 IOPS, 127.42 MiB/s
00:31:05.994 Latency(us)
00:31:05.994 [2024-11-20T15:24:06.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:05.994 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:05.994 Nvme1n1 : 5.00 16320.54 127.50 0.00 0.00 7836.62 2122.80 12936.24
00:31:05.994 [2024-11-20T15:24:06.831Z] ===================================================================================================================
00:31:05.994 [2024-11-20T15:24:06.831Z] Total : 16320.54 127.50 0.00 0.00 7836.62 2122.80 12936.24
00:31:05.994 [2024-11-20 16:24:06.674848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.994 [2024-11-20 16:24:06.674865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:05.994 [2024-11-20 16:24:06.686850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.994 [2024-11-20 16:24:06.686866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:05.994 [2024-11-20 16:24:06.698863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.994 [2024-11-20 16:24:06.698878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:05.994 [2024-11-20 16:24:06.710856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.994 [2024-11-20 16:24:06.710874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:05.994 [2024-11-20 16:24:06.722848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.994 [2024-11-20 16:24:06.722860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.734852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.734865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.746855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.746873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.758851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.758873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.770847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.770860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.994 [2024-11-20 16:24:06.782844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.994 [2024-11-20 16:24:06.782856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.995 [2024-11-20 16:24:06.794843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.995 [2024-11-20 16:24:06.794853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.995 [2024-11-20 16:24:06.806849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.995 [2024-11-20 16:24:06.806863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.995 [2024-11-20 16:24:06.818842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.995 [2024-11-20 16:24:06.818852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.255 
[2024-11-20 16:24:06.830845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:06.255 [2024-11-20 16:24:06.830857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:06.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2953139) - No such process
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2953139
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:06.255 delay0
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.255 16:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:31:06.255 [2024-11-20 16:24:06.974371] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:12.826 Initializing NVMe Controllers
00:31:12.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:12.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:12.826 Initialization complete. Launching workers.
00:31:12.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1191
00:31:12.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1455, failed to submit 56
00:31:12.826 success 1323, unsuccessful 132, failed 0
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:12.826 rmmod nvme_tcp
00:31:12.826 rmmod nvme_fabrics
00:31:12.826 rmmod nvme_keyring
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2951315 ']'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2951315
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2951315 ']'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2951315
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951315
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951315'
00:31:12.826 killing process with pid 2951315
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2951315
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2951315
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:12.826 16:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:15.362
00:31:15.362 real	0m31.496s
00:31:15.362 user	0m40.866s
00:31:15.362 sys	0m12.314s
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:15.362 ************************************
00:31:15.362 END TEST nvmf_zcopy
00:31:15.362 ************************************
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:15.362 ************************************
00:31:15.362 START TEST nvmf_nmic
00:31:15.362 ************************************
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:15.362 * Looking for test storage...
00:31:15.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:31:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:15.362 --rc genhtml_branch_coverage=1
00:31:15.362 --rc genhtml_function_coverage=1
00:31:15.362 --rc genhtml_legend=1
00:31:15.362 --rc geninfo_all_blocks=1
00:31:15.362 --rc geninfo_unexecuted_blocks=1
00:31:15.362
00:31:15.362 '
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:31:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:15.362 --rc genhtml_branch_coverage=1
00:31:15.362 --rc genhtml_function_coverage=1
00:31:15.362 --rc genhtml_legend=1
00:31:15.362 --rc geninfo_all_blocks=1
00:31:15.362 --rc geninfo_unexecuted_blocks=1
00:31:15.362
00:31:15.362 '
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:31:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:15.362 --rc genhtml_branch_coverage=1
00:31:15.362 --rc genhtml_function_coverage=1
00:31:15.362 --rc genhtml_legend=1
00:31:15.362 --rc geninfo_all_blocks=1
00:31:15.362 --rc geninfo_unexecuted_blocks=1
00:31:15.362
00:31:15.362 '
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:31:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:15.362 --rc genhtml_branch_coverage=1
00:31:15.362 --rc genhtml_function_coverage=1
00:31:15.362 --rc genhtml_legend=1
00:31:15.362 --rc geninfo_all_blocks=1
00:31:15.362 --rc geninfo_unexecuted_blocks=1
00:31:15.362
00:31:15.362 '
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:15.362 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:31:15.363 16:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:31:21.932 Found 0000:86:00.0 (0x8086 - 0x159b)
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:31:21.932 Found 0000:86:00.1 (0x8086 - 0x159b)
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:31:21.932 Found net devices under 0000:86:00.0: cvl_0_0
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:31:21.932 Found net devices under 0000:86:00.1: cvl_0_1
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:21.932 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:21.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:21.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms
00:31:21.933
00:31:21.933 --- 10.0.0.2 ping statistics ---
00:31:21.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:21.933 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:21.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:21.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:31:21.933
00:31:21.933 --- 10.0.0.1 ping statistics ---
00:31:21.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:21.933 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2958938
00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2958938 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2958938 ']' 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.933 16:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 [2024-11-20 16:24:21.893818] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.933 [2024-11-20 16:24:21.894757] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:31:21.933 [2024-11-20 16:24:21.894791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.933 [2024-11-20 16:24:21.974716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.933 [2024-11-20 16:24:22.019696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.933 [2024-11-20 16:24:22.019727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.933 [2024-11-20 16:24:22.019734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.933 [2024-11-20 16:24:22.019741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.933 [2024-11-20 16:24:22.019746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.933 [2024-11-20 16:24:22.021163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.933 [2024-11-20 16:24:22.021279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.933 [2024-11-20 16:24:22.021293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.933 [2024-11-20 16:24:22.021299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.933 [2024-11-20 16:24:22.090786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.933 [2024-11-20 16:24:22.091563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.933 [2024-11-20 16:24:22.091776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:21.933 [2024-11-20 16:24:22.092157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:21.933 [2024-11-20 16:24:22.092168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 [2024-11-20 16:24:22.170152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 Malloc0 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 [2024-11-20 16:24:22.258475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.933 16:24:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:21.933 test case1: single bdev can't be used in multiple subsystems 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:21.933 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.934 [2024-11-20 16:24:22.289835] 
bdev.c:8326:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:21.934 [2024-11-20 16:24:22.289859] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:21.934 [2024-11-20 16:24:22.289867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.934 request: 00:31:21.934 { 00:31:21.934 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:21.934 "namespace": { 00:31:21.934 "bdev_name": "Malloc0", 00:31:21.934 "no_auto_visible": false 00:31:21.934 }, 00:31:21.934 "method": "nvmf_subsystem_add_ns", 00:31:21.934 "req_id": 1 00:31:21.934 } 00:31:21.934 Got JSON-RPC error response 00:31:21.934 response: 00:31:21.934 { 00:31:21.934 "code": -32602, 00:31:21.934 "message": "Invalid parameters" 00:31:21.934 } 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:21.934 Adding namespace failed - expected result. 
00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:21.934 test case2: host connect to nvmf target in multiple paths 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:21.934 [2024-11-20 16:24:22.301940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:21.934 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:22.193 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:22.194 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:22.194 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:22.194 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:22.194 16:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:24.728 16:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:24.728 [global] 00:31:24.728 thread=1 00:31:24.728 invalidate=1 00:31:24.728 rw=write 00:31:24.728 time_based=1 00:31:24.728 runtime=1 00:31:24.728 ioengine=libaio 00:31:24.728 direct=1 00:31:24.728 bs=4096 00:31:24.728 iodepth=1 00:31:24.728 norandommap=0 00:31:24.728 numjobs=1 00:31:24.728 00:31:24.728 verify_dump=1 00:31:24.728 verify_backlog=512 00:31:24.728 verify_state_save=0 00:31:24.728 do_verify=1 00:31:24.728 verify=crc32c-intel 00:31:24.728 [job0] 00:31:24.728 filename=/dev/nvme0n1 00:31:24.728 Could not set queue depth (nvme0n1) 00:31:24.728 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.728 fio-3.35 00:31:24.728 Starting 1 thread 00:31:25.664 00:31:25.664 job0: (groupid=0, jobs=1): err= 0: pid=2959706: Wed Nov 20 
16:24:26 2024 00:31:25.665 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:31:25.665 slat (nsec): min=9557, max=23237, avg=22015.78, stdev=2727.87 00:31:25.665 clat (usec): min=40870, max=41960, avg=41030.21, stdev=232.79 00:31:25.665 lat (usec): min=40892, max=41982, avg=41052.23, stdev=231.67 00:31:25.665 clat percentiles (usec): 00:31:25.665 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:25.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:25.665 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:25.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:25.665 | 99.99th=[42206] 00:31:25.665 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:31:25.665 slat (nsec): min=8796, max=40068, avg=9974.65, stdev=2028.10 00:31:25.665 clat (usec): min=130, max=372, avg=145.39, stdev=27.14 00:31:25.665 lat (usec): min=139, max=412, avg=155.37, stdev=27.84 00:31:25.665 clat percentiles (usec): 00:31:25.665 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:31:25.665 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:31:25.665 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 241], 00:31:25.665 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 371], 99.95th=[ 371], 00:31:25.665 | 99.99th=[ 371] 00:31:25.665 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:25.665 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:25.665 lat (usec) : 250=95.14%, 500=0.56% 00:31:25.665 lat (msec) : 50=4.30% 00:31:25.665 cpu : usr=0.49%, sys=0.20%, ctx=535, majf=0, minf=1 00:31:25.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.665 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:25.665 00:31:25.665 Run status group 0 (all jobs): 00:31:25.665 READ: bw=89.8KiB/s (91.9kB/s), 89.8KiB/s-89.8KiB/s (91.9kB/s-91.9kB/s), io=92.0KiB (94.2kB), run=1025-1025msec 00:31:25.665 WRITE: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), io=2048KiB (2097kB), run=1025-1025msec 00:31:25.665 00:31:25.665 Disk stats (read/write): 00:31:25.665 nvme0n1: ios=69/512, merge=0/0, ticks=888/69, in_queue=957, util=95.59% 00:31:25.665 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:25.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:25.924 16:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.924 rmmod nvme_tcp 00:31:25.924 rmmod nvme_fabrics 00:31:25.924 rmmod nvme_keyring 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2958938 ']' 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2958938 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2958938 ']' 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2958938 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958938 
00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958938' 00:31:25.924 killing process with pid 2958938 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2958938 00:31:25.924 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2958938 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.184 16:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.184 16:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.723 16:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:28.723 00:31:28.723 real 0m13.214s 00:31:28.723 user 0m24.740s 00:31:28.723 sys 0m5.958s 00:31:28.723 16:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.723 16:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.723 ************************************ 00:31:28.723 END TEST nvmf_nmic 00:31:28.723 ************************************ 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:28.723 ************************************ 00:31:28.723 START TEST nvmf_fio_target 00:31:28.723 ************************************ 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:28.723 * Looking for test storage... 
00:31:28.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:31:28.723 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:31:28.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:28.724 --rc genhtml_branch_coverage=1
00:31:28.724 --rc genhtml_function_coverage=1
00:31:28.724 --rc genhtml_legend=1
00:31:28.724 --rc geninfo_all_blocks=1
00:31:28.724 --rc geninfo_unexecuted_blocks=1
00:31:28.724
00:31:28.724 '
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:31:28.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:28.724 --rc genhtml_branch_coverage=1
00:31:28.724 --rc genhtml_function_coverage=1
00:31:28.724 --rc genhtml_legend=1
00:31:28.724 --rc geninfo_all_blocks=1
00:31:28.724 --rc geninfo_unexecuted_blocks=1
00:31:28.724
00:31:28.724 '
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:31:28.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:28.724 --rc genhtml_branch_coverage=1
00:31:28.724 --rc genhtml_function_coverage=1
00:31:28.724 --rc genhtml_legend=1
00:31:28.724 --rc geninfo_all_blocks=1
00:31:28.724 --rc geninfo_unexecuted_blocks=1
00:31:28.724
00:31:28.724 '
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:31:28.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:28.724 --rc genhtml_branch_coverage=1
00:31:28.724 --rc genhtml_function_coverage=1
00:31:28.724 --rc genhtml_legend=1
00:31:28.724 --rc geninfo_all_blocks=1
00:31:28.724 --rc geninfo_unexecuted_blocks=1
00:31:28.724
00:31:28.724 '
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:28.724 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:31:28.725 16:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:34.100 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:34.100 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:31:34.100 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:34.100 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:34.100 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:31:34.101 Found 0000:86:00.0 (0x8086 - 0x159b)
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:31:34.101 Found 0000:86:00.1 (0x8086 - 0x159b)
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:31:34.101 Found net devices under 0000:86:00.0: cvl_0_0
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:31:34.101 Found net devices under 0000:86:00.1: cvl_0_1
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:34.101 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:34.102 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:34.102 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:34.361 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:34.361 16:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:34.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:34.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms
00:31:34.361
00:31:34.361 --- 10.0.0.2 ping statistics ---
00:31:34.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:34.361 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:34.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:34.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:31:34.361
00:31:34.361 --- 10.0.0.1 ping statistics ---
00:31:34.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:34.361 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:34.361 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2963317
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2963317
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2963317 ']'
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:34.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:34.362 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:34.621 [2024-11-20 16:24:35.202675] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:34.621 [2024-11-20 16:24:35.203617] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:31:34.621 [2024-11-20 16:24:35.203652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:34.621 [2024-11-20 16:24:35.283626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:34.621 [2024-11-20 16:24:35.326246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:34.621 [2024-11-20 16:24:35.326283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:34.621 [2024-11-20 16:24:35.326290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:34.621 [2024-11-20 16:24:35.326296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:34.621 [2024-11-20 16:24:35.326301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:34.621 [2024-11-20 16:24:35.327814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:34.621 [2024-11-20 16:24:35.327922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:34.621 [2024-11-20 16:24:35.328033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:34.621 [2024-11-20 16:24:35.328033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:34.621 [2024-11-20 16:24:35.396858] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:34.621 [2024-11-20 16:24:35.397275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:34.621 [2024-11-20 16:24:35.397744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:31:34.621 [2024-11-20 16:24:35.398194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:34.621 [2024-11-20 16:24:35.398244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:34.881 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:34.881 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-11-20 16:24:35.636773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:35.140 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:31:35.140 16:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:35.398 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:31:35.398 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:35.656 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:31:35.656 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:35.915 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:31:35.915 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:31:36.174 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:36.174 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:31:36.174 16:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:36.432 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:31:36.432 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:31:36.690 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:31:36.690 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:31:36.949 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:31:36.949 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:31:36.949 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:37.206 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:31:37.206 16:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:31:37.464 16:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:37.723 [2024-11-20 16:24:38.324697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:37.723 16:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:31:37.981 16:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:31:37.981 16:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:31:38.240 16:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:31:38.240 16:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:31:38.240 16:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:31:38.241 16:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:31:38.241 16:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:31:38.241 16:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target --
common/autotest_common.sh@1212 -- # return 0 00:31:40.771 16:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:40.771 [global] 00:31:40.771 thread=1 00:31:40.771 invalidate=1 00:31:40.771 rw=write 00:31:40.771 time_based=1 00:31:40.771 runtime=1 00:31:40.771 ioengine=libaio 00:31:40.771 direct=1 00:31:40.771 bs=4096 00:31:40.771 iodepth=1 00:31:40.771 norandommap=0 00:31:40.771 numjobs=1 00:31:40.771 00:31:40.771 verify_dump=1 00:31:40.771 verify_backlog=512 00:31:40.771 verify_state_save=0 00:31:40.771 do_verify=1 00:31:40.771 verify=crc32c-intel 00:31:40.771 [job0] 00:31:40.771 filename=/dev/nvme0n1 00:31:40.771 [job1] 00:31:40.771 filename=/dev/nvme0n2 00:31:40.771 [job2] 00:31:40.771 filename=/dev/nvme0n3 00:31:40.771 [job3] 00:31:40.771 filename=/dev/nvme0n4 00:31:40.771 Could not set queue depth (nvme0n1) 00:31:40.771 Could not set queue depth (nvme0n2) 00:31:40.771 Could not set queue depth (nvme0n3) 00:31:40.771 Could not set queue depth (nvme0n4) 00:31:40.771 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.771 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.771 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.771 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.771 fio-3.35 00:31:40.771 Starting 4 threads 00:31:42.146 00:31:42.146 job0: (groupid=0, jobs=1): err= 0: pid=2964599: Wed Nov 20 16:24:42 2024 00:31:42.146 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:31:42.146 slat (nsec): min=10430, max=25796, avg=23657.95, stdev=3013.18 00:31:42.146 clat (usec): min=40913, max=42108, avg=41295.92, stdev=473.88 00:31:42.146 lat (usec): min=40938, 
max=42134, avg=41319.58, stdev=474.12 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:42.146 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:42.146 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.146 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:42.146 | 99.99th=[42206] 00:31:42.146 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:31:42.146 slat (nsec): min=10676, max=35836, avg=12245.31, stdev=1633.04 00:31:42.146 clat (usec): min=144, max=300, avg=167.21, stdev=12.98 00:31:42.146 lat (usec): min=155, max=336, avg=179.45, stdev=13.71 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 159], 00:31:42.146 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:31:42.146 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:31:42.146 | 99.00th=[ 212], 99.50th=[ 241], 99.90th=[ 302], 99.95th=[ 302], 00:31:42.146 | 99.99th=[ 302] 00:31:42.146 bw ( KiB/s): min= 4096, max= 4096, per=17.12%, avg=4096.00, stdev= 0.00, samples=1 00:31:42.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:42.146 lat (usec) : 250=95.51%, 500=0.37% 00:31:42.146 lat (msec) : 50=4.12% 00:31:42.146 cpu : usr=0.30%, sys=1.09%, ctx=537, majf=0, minf=1 00:31:42.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.146 job1: (groupid=0, jobs=1): err= 0: pid=2964614: Wed Nov 20 16:24:42 2024 00:31:42.146 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 
00:31:42.146 slat (nsec): min=10911, max=14814, avg=12504.09, stdev=1211.84 00:31:42.146 clat (usec): min=40546, max=41063, avg=40965.16, stdev=103.11 00:31:42.146 lat (usec): min=40557, max=41074, avg=40977.66, stdev=103.29 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:42.146 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:42.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:42.146 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:42.146 | 99.99th=[41157] 00:31:42.146 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:31:42.146 slat (nsec): min=11592, max=35980, avg=14163.07, stdev=2094.05 00:31:42.146 clat (usec): min=141, max=378, avg=195.44, stdev=23.44 00:31:42.146 lat (usec): min=154, max=391, avg=209.61, stdev=23.69 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 184], 00:31:42.146 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:31:42.146 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 235], 00:31:42.146 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 379], 99.95th=[ 379], 00:31:42.146 | 99.99th=[ 379] 00:31:42.146 bw ( KiB/s): min= 4096, max= 4096, per=17.12%, avg=4096.00, stdev= 0.00, samples=1 00:31:42.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:42.146 lat (usec) : 250=92.51%, 500=3.37% 00:31:42.146 lat (msec) : 50=4.12% 00:31:42.146 cpu : usr=0.39%, sys=1.09%, ctx=535, majf=0, minf=1 00:31:42.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.146 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:31:42.146 job2: (groupid=0, jobs=1): err= 0: pid=2964634: Wed Nov 20 16:24:42 2024 00:31:42.146 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:42.146 slat (nsec): min=7144, max=61890, avg=8341.58, stdev=2034.31 00:31:42.146 clat (usec): min=209, max=427, avg=242.99, stdev=12.49 00:31:42.146 lat (usec): min=217, max=436, avg=251.33, stdev=12.61 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 235], 00:31:42.146 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:31:42.146 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:31:42.146 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 408], 99.95th=[ 416], 00:31:42.146 | 99.99th=[ 429] 00:31:42.146 write: IOPS=2477, BW=9910KiB/s (10.1MB/s)(9920KiB/1001msec); 0 zone resets 00:31:42.146 slat (nsec): min=10084, max=50982, avg=11897.28, stdev=2648.53 00:31:42.146 clat (usec): min=140, max=318, avg=178.38, stdev=14.38 00:31:42.146 lat (usec): min=150, max=330, avg=190.28, stdev=15.34 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:31:42.146 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:31:42.146 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 202], 00:31:42.146 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 269], 99.95th=[ 297], 00:31:42.146 | 99.99th=[ 318] 00:31:42.146 bw ( KiB/s): min= 9720, max= 9720, per=40.63%, avg=9720.00, stdev= 0.00, samples=1 00:31:42.146 iops : min= 2430, max= 2430, avg=2430.00, stdev= 0.00, samples=1 00:31:42.146 lat (usec) : 250=90.70%, 500=9.30% 00:31:42.146 cpu : usr=3.50%, sys=7.60%, ctx=4528, majf=0, minf=2 00:31:42.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:42.146 issued rwts: total=2048,2480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.146 job3: (groupid=0, jobs=1): err= 0: pid=2964640: Wed Nov 20 16:24:42 2024 00:31:42.146 read: IOPS=2219, BW=8879KiB/s (9092kB/s)(8888KiB/1001msec) 00:31:42.146 slat (nsec): min=6474, max=27599, avg=7523.72, stdev=921.50 00:31:42.146 clat (usec): min=208, max=347, avg=234.08, stdev=11.53 00:31:42.146 lat (usec): min=215, max=358, avg=241.60, stdev=11.69 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 225], 00:31:42.146 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 235], 00:31:42.146 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 251], 00:31:42.146 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 334], 99.95th=[ 347], 00:31:42.146 | 99.99th=[ 347] 00:31:42.146 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:42.146 slat (nsec): min=9375, max=70084, avg=10923.44, stdev=1866.78 00:31:42.146 clat (usec): min=135, max=298, avg=166.20, stdev=17.15 00:31:42.146 lat (usec): min=145, max=368, avg=177.12, stdev=18.03 00:31:42.146 clat percentiles (usec): 00:31:42.146 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:31:42.146 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:31:42.146 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 202], 00:31:42.146 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 255], 99.95th=[ 281], 00:31:42.146 | 99.99th=[ 297] 00:31:42.146 bw ( KiB/s): min=10816, max=10816, per=45.22%, avg=10816.00, stdev= 0.00, samples=1 00:31:42.146 iops : min= 2704, max= 2704, avg=2704.00, stdev= 0.00, samples=1 00:31:42.146 lat (usec) : 250=97.03%, 500=2.97% 00:31:42.146 cpu : usr=2.50%, sys=4.40%, ctx=4782, majf=0, minf=1 00:31:42.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.146 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.146 issued rwts: total=2222,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.146 00:31:42.146 Run status group 0 (all jobs): 00:31:42.146 READ: bw=16.6MiB/s (17.4MB/s), 86.8KiB/s-8879KiB/s (88.9kB/s-9092kB/s), io=16.9MiB (17.7MB), run=1001-1014msec 00:31:42.146 WRITE: bw=23.4MiB/s (24.5MB/s), 2020KiB/s-9.99MiB/s (2068kB/s-10.5MB/s), io=23.7MiB (24.8MB), run=1001-1014msec 00:31:42.146 00:31:42.146 Disk stats (read/write): 00:31:42.146 nvme0n1: ios=42/512, merge=0/0, ticks=1609/79, in_queue=1688, util=84.77% 00:31:42.146 nvme0n2: ios=40/512, merge=0/0, ticks=1600/95, in_queue=1695, util=88.80% 00:31:42.146 nvme0n3: ios=1817/2048, merge=0/0, ticks=474/349, in_queue=823, util=94.86% 00:31:42.146 nvme0n4: ios=2012/2048, merge=0/0, ticks=529/334, in_queue=863, util=95.45% 00:31:42.146 16:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:42.146 [global] 00:31:42.146 thread=1 00:31:42.146 invalidate=1 00:31:42.146 rw=randwrite 00:31:42.146 time_based=1 00:31:42.146 runtime=1 00:31:42.146 ioengine=libaio 00:31:42.146 direct=1 00:31:42.146 bs=4096 00:31:42.146 iodepth=1 00:31:42.146 norandommap=0 00:31:42.146 numjobs=1 00:31:42.146 00:31:42.146 verify_dump=1 00:31:42.146 verify_backlog=512 00:31:42.146 verify_state_save=0 00:31:42.146 do_verify=1 00:31:42.146 verify=crc32c-intel 00:31:42.146 [job0] 00:31:42.146 filename=/dev/nvme0n1 00:31:42.146 [job1] 00:31:42.147 filename=/dev/nvme0n2 00:31:42.147 [job2] 00:31:42.147 filename=/dev/nvme0n3 00:31:42.147 [job3] 00:31:42.147 filename=/dev/nvme0n4 00:31:42.147 Could not set queue depth (nvme0n1) 00:31:42.147 Could not set queue depth (nvme0n2) 00:31:42.147 
Could not set queue depth (nvme0n3) 00:31:42.147 Could not set queue depth (nvme0n4) 00:31:42.405 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.405 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.405 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.405 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.405 fio-3.35 00:31:42.405 Starting 4 threads 00:31:43.781 00:31:43.781 job0: (groupid=0, jobs=1): err= 0: pid=2965019: Wed Nov 20 16:24:44 2024 00:31:43.781 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:31:43.781 slat (nsec): min=10051, max=22172, avg=21316.36, stdev=2528.96 00:31:43.781 clat (usec): min=40859, max=41256, avg=40980.31, stdev=74.62 00:31:43.781 lat (usec): min=40881, max=41267, avg=41001.62, stdev=72.51 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:43.781 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.781 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:43.781 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:43.781 | 99.99th=[41157] 00:31:43.781 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:31:43.781 slat (nsec): min=9642, max=62994, avg=10941.18, stdev=2793.17 00:31:43.781 clat (usec): min=136, max=261, avg=192.39, stdev=14.81 00:31:43.781 lat (usec): min=147, max=293, avg=203.33, stdev=15.35 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[ 147], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 184], 00:31:43.781 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:31:43.781 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:31:43.781 | 
99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 262], 99.95th=[ 262], 00:31:43.781 | 99.99th=[ 262] 00:31:43.781 bw ( KiB/s): min= 4087, max= 4087, per=50.29%, avg=4087.00, stdev= 0.00, samples=1 00:31:43.781 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:43.781 lat (usec) : 250=95.51%, 500=0.37% 00:31:43.781 lat (msec) : 50=4.12% 00:31:43.781 cpu : usr=0.60%, sys=0.60%, ctx=535, majf=0, minf=1 00:31:43.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.781 job1: (groupid=0, jobs=1): err= 0: pid=2965022: Wed Nov 20 16:24:44 2024 00:31:43.781 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:31:43.781 slat (nsec): min=9731, max=23999, avg=22174.73, stdev=2909.92 00:31:43.781 clat (usec): min=40846, max=41921, avg=41007.93, stdev=211.03 00:31:43.781 lat (usec): min=40870, max=41945, avg=41030.10, stdev=211.58 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:43.781 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.781 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:43.781 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:43.781 | 99.99th=[41681] 00:31:43.781 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:31:43.781 slat (nsec): min=9396, max=36673, avg=10937.93, stdev=1812.31 00:31:43.781 clat (usec): min=165, max=286, avg=183.36, stdev=11.94 00:31:43.781 lat (usec): min=175, max=323, avg=194.30, stdev=12.55 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 174], 
20.00th=[ 176], 00:31:43.781 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:31:43.781 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 198], 00:31:43.781 | 99.00th=[ 229], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 289], 00:31:43.781 | 99.99th=[ 289] 00:31:43.781 bw ( KiB/s): min= 4096, max= 4096, per=50.40%, avg=4096.00, stdev= 0.00, samples=1 00:31:43.781 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:43.781 lat (usec) : 250=95.32%, 500=0.56% 00:31:43.781 lat (msec) : 50=4.12% 00:31:43.781 cpu : usr=0.90%, sys=0.40%, ctx=534, majf=0, minf=1 00:31:43.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.781 job2: (groupid=0, jobs=1): err= 0: pid=2965023: Wed Nov 20 16:24:44 2024 00:31:43.781 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:31:43.781 slat (nsec): min=10782, max=25964, avg=24742.59, stdev=3134.78 00:31:43.781 clat (usec): min=40675, max=41019, avg=40946.79, stdev=68.55 00:31:43.781 lat (usec): min=40686, max=41045, avg=40971.54, stdev=71.30 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:43.781 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.781 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:43.781 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:43.781 | 99.99th=[41157] 00:31:43.781 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:31:43.781 slat (nsec): min=9203, max=53984, avg=12044.34, stdev=2498.14 00:31:43.781 clat (usec): min=141, max=357, 
avg=191.74, stdev=17.84 00:31:43.781 lat (usec): min=152, max=368, avg=203.79, stdev=18.19 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 182], 00:31:43.781 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:31:43.781 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:31:43.781 | 99.00th=[ 243], 99.50th=[ 285], 99.90th=[ 359], 99.95th=[ 359], 00:31:43.781 | 99.99th=[ 359] 00:31:43.781 bw ( KiB/s): min= 4087, max= 4087, per=50.29%, avg=4087.00, stdev= 0.00, samples=1 00:31:43.781 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:43.781 lat (usec) : 250=94.94%, 500=0.94% 00:31:43.781 lat (msec) : 50=4.12% 00:31:43.781 cpu : usr=0.60%, sys=0.79%, ctx=535, majf=0, minf=1 00:31:43.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.781 job3: (groupid=0, jobs=1): err= 0: pid=2965024: Wed Nov 20 16:24:44 2024 00:31:43.781 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:31:43.781 slat (nsec): min=9493, max=55113, avg=22874.22, stdev=7887.61 00:31:43.781 clat (usec): min=243, max=41122, avg=39181.78, stdev=8488.93 00:31:43.781 lat (usec): min=253, max=41145, avg=39204.65, stdev=8491.56 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[ 243], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:31:43.781 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:43.781 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:43.781 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:43.781 | 99.99th=[41157] 00:31:43.781 write: 
IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:31:43.781 slat (nsec): min=11835, max=68045, avg=14331.92, stdev=3955.75 00:31:43.781 clat (usec): min=147, max=331, avg=180.68, stdev=14.34 00:31:43.781 lat (usec): min=174, max=373, avg=195.02, stdev=15.57 00:31:43.781 clat percentiles (usec): 00:31:43.781 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 174], 00:31:43.781 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:31:43.781 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 192], 95.00th=[ 198], 00:31:43.781 | 99.00th=[ 225], 99.50th=[ 293], 99.90th=[ 334], 99.95th=[ 334], 00:31:43.781 | 99.99th=[ 334] 00:31:43.781 bw ( KiB/s): min= 4096, max= 4096, per=50.40%, avg=4096.00, stdev= 0.00, samples=1 00:31:43.781 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:43.781 lat (usec) : 250=95.14%, 500=0.75% 00:31:43.781 lat (msec) : 50=4.11% 00:31:43.781 cpu : usr=0.20%, sys=1.40%, ctx=536, majf=0, minf=1 00:31:43.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.781 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.781 00:31:43.781 Run status group 0 (all jobs): 00:31:43.781 READ: bw=353KiB/s (362kB/s), 87.3KiB/s-91.6KiB/s (89.4kB/s-93.8kB/s), io=356KiB (365kB), run=1004-1008msec 00:31:43.781 WRITE: bw=8127KiB/s (8322kB/s), 2032KiB/s-2040KiB/s (2081kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1008msec 00:31:43.781 00:31:43.781 Disk stats (read/write): 00:31:43.781 nvme0n1: ios=68/512, merge=0/0, ticks=764/95, in_queue=859, util=86.77% 00:31:43.781 nvme0n2: ios=18/512, merge=0/0, ticks=739/86, in_queue=825, util=86.89% 00:31:43.781 nvme0n3: ios=60/512, merge=0/0, ticks=1650/92, in_queue=1742, util=98.75% 
00:31:43.781 nvme0n4: ios=76/512, merge=0/0, ticks=1071/88, in_queue=1159, util=98.11% 00:31:43.781 16:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:43.781 [global] 00:31:43.781 thread=1 00:31:43.781 invalidate=1 00:31:43.781 rw=write 00:31:43.781 time_based=1 00:31:43.781 runtime=1 00:31:43.781 ioengine=libaio 00:31:43.781 direct=1 00:31:43.781 bs=4096 00:31:43.781 iodepth=128 00:31:43.781 norandommap=0 00:31:43.781 numjobs=1 00:31:43.781 00:31:43.781 verify_dump=1 00:31:43.782 verify_backlog=512 00:31:43.782 verify_state_save=0 00:31:43.782 do_verify=1 00:31:43.782 verify=crc32c-intel 00:31:43.782 [job0] 00:31:43.782 filename=/dev/nvme0n1 00:31:43.782 [job1] 00:31:43.782 filename=/dev/nvme0n2 00:31:43.782 [job2] 00:31:43.782 filename=/dev/nvme0n3 00:31:43.782 [job3] 00:31:43.782 filename=/dev/nvme0n4 00:31:43.782 Could not set queue depth (nvme0n1) 00:31:43.782 Could not set queue depth (nvme0n2) 00:31:43.782 Could not set queue depth (nvme0n3) 00:31:43.782 Could not set queue depth (nvme0n4) 00:31:43.782 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.782 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.782 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.782 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:43.782 fio-3.35 00:31:43.782 Starting 4 threads 00:31:45.156 00:31:45.156 job0: (groupid=0, jobs=1): err= 0: pid=2965391: Wed Nov 20 16:24:45 2024 00:31:45.156 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:31:45.156 slat (nsec): min=1033, max=13294k, avg=96871.02, stdev=817474.88 00:31:45.156 clat (usec): min=1063, max=37987, 
avg=12937.66, stdev=5487.40 00:31:45.156 lat (usec): min=1071, max=38011, avg=13034.53, stdev=5568.47 00:31:45.156 clat percentiles (usec): 00:31:45.156 | 1.00th=[ 1827], 5.00th=[ 5145], 10.00th=[ 8291], 20.00th=[ 9765], 00:31:45.156 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11207], 60.00th=[12911], 00:31:45.156 | 70.00th=[13960], 80.00th=[15664], 90.00th=[21103], 95.00th=[24773], 00:31:45.156 | 99.00th=[29492], 99.50th=[30278], 99.90th=[33817], 99.95th=[34341], 00:31:45.156 | 99.99th=[38011] 00:31:45.156 write: IOPS=3890, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1009msec); 0 zone resets 00:31:45.156 slat (usec): min=2, max=11756, avg=133.00, stdev=784.73 00:31:45.156 clat (usec): min=507, max=98685, avg=20762.01, stdev=22936.25 00:31:45.156 lat (usec): min=554, max=98695, avg=20895.01, stdev=23090.43 00:31:45.156 clat percentiles (usec): 00:31:45.156 | 1.00th=[ 1188], 5.00th=[ 3458], 10.00th=[ 4555], 20.00th=[ 7308], 00:31:45.156 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[10814], 60.00th=[11076], 00:31:45.156 | 70.00th=[15401], 80.00th=[31065], 90.00th=[58983], 95.00th=[80217], 00:31:45.156 | 99.00th=[93848], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091], 00:31:45.156 | 99.99th=[99091] 00:31:45.156 bw ( KiB/s): min=10472, max=19912, per=22.91%, avg=15192.00, stdev=6675.09, samples=2 00:31:45.156 iops : min= 2618, max= 4978, avg=3798.00, stdev=1668.77, samples=2 00:31:45.156 lat (usec) : 750=0.11%, 1000=0.32% 00:31:45.156 lat (msec) : 2=0.97%, 4=4.45%, 10=26.42%, 20=49.47%, 50=10.69% 00:31:45.156 lat (msec) : 100=7.58% 00:31:45.156 cpu : usr=2.88%, sys=3.27%, ctx=440, majf=0, minf=1 00:31:45.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:45.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.156 issued rwts: total=3584,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.156 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:45.156 job1: (groupid=0, jobs=1): err= 0: pid=2965392: Wed Nov 20 16:24:45 2024 00:31:45.156 read: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec) 00:31:45.156 slat (nsec): min=1724, max=10865k, avg=115871.70, stdev=803378.87 00:31:45.156 clat (usec): min=2222, max=81954, avg=13120.14, stdev=8372.09 00:31:45.157 lat (usec): min=4741, max=81966, avg=13236.01, stdev=8483.50 00:31:45.157 clat percentiles (usec): 00:31:45.157 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9634], 00:31:45.157 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[11731], 00:31:45.157 | 70.00th=[12518], 80.00th=[14222], 90.00th=[17171], 95.00th=[24511], 00:31:45.157 | 99.00th=[54789], 99.50th=[68682], 99.90th=[82314], 99.95th=[82314], 00:31:45.157 | 99.99th=[82314] 00:31:45.157 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:31:45.157 slat (nsec): min=1910, max=15501k, avg=126252.21, stdev=712016.30 00:31:45.157 clat (usec): min=1317, max=85942, avg=19219.43, stdev=16414.72 00:31:45.157 lat (usec): min=1327, max=85948, avg=19345.69, stdev=16511.21 00:31:45.157 clat percentiles (usec): 00:31:45.157 | 1.00th=[ 4752], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 8160], 00:31:45.157 | 30.00th=[ 9110], 40.00th=[10683], 50.00th=[13304], 60.00th=[15401], 00:31:45.157 | 70.00th=[21365], 80.00th=[25560], 90.00th=[36963], 95.00th=[60031], 00:31:45.157 | 99.00th=[78119], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:31:45.157 | 99.99th=[85459] 00:31:45.157 bw ( KiB/s): min=16368, max=16384, per=24.70%, avg=16376.00, stdev=11.31, samples=2 00:31:45.157 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:31:45.157 lat (msec) : 2=0.04%, 4=0.32%, 10=32.05%, 20=46.98%, 50=15.33% 00:31:45.157 lat (msec) : 100=5.28% 00:31:45.157 cpu : usr=2.79%, sys=5.78%, ctx=311, majf=0, minf=2 00:31:45.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:45.157 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.157 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.157 job2: (groupid=0, jobs=1): err= 0: pid=2965394: Wed Nov 20 16:24:45 2024 00:31:45.157 read: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1008msec) 00:31:45.157 slat (nsec): min=1108, max=10458k, avg=84252.66, stdev=690083.04 00:31:45.157 clat (usec): min=2514, max=21798, avg=10662.19, stdev=2758.94 00:31:45.157 lat (usec): min=3393, max=28863, avg=10746.44, stdev=2818.75 00:31:45.157 clat percentiles (usec): 00:31:45.157 | 1.00th=[ 4555], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8586], 00:31:45.157 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10945], 00:31:45.157 | 70.00th=[11469], 80.00th=[12387], 90.00th=[14222], 95.00th=[15795], 00:31:45.157 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:31:45.157 | 99.99th=[21890] 00:31:45.157 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:31:45.157 slat (nsec): min=1881, max=26083k, avg=77335.52, stdev=566270.23 00:31:45.157 clat (usec): min=1448, max=42763, avg=10569.31, stdev=4590.53 00:31:45.157 lat (usec): min=1465, max=42776, avg=10646.65, stdev=4616.92 00:31:45.157 clat percentiles (usec): 00:31:45.157 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6521], 20.00th=[ 8094], 00:31:45.157 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10683], 00:31:45.157 | 70.00th=[11469], 80.00th=[11994], 90.00th=[14484], 95.00th=[16909], 00:31:45.157 | 99.00th=[38011], 99.50th=[40109], 99.90th=[42206], 99.95th=[42730], 00:31:45.157 | 99.99th=[42730] 00:31:45.157 bw ( KiB/s): min=24576, max=24576, per=37.06%, avg=24576.00, stdev= 0.00, samples=2 00:31:45.157 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:31:45.157 lat (msec) : 2=0.13%, 
4=0.76%, 10=50.13%, 20=47.40%, 50=1.57% 00:31:45.157 cpu : usr=4.17%, sys=5.76%, ctx=552, majf=0, minf=1 00:31:45.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:45.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.157 issued rwts: total=5876,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.157 job3: (groupid=0, jobs=1): err= 0: pid=2965395: Wed Nov 20 16:24:45 2024 00:31:45.157 read: IOPS=2218, BW=8872KiB/s (9085kB/s)(8952KiB/1009msec) 00:31:45.157 slat (nsec): min=1228, max=26064k, avg=158884.80, stdev=1374589.35 00:31:45.157 clat (usec): min=792, max=78604, avg=21913.15, stdev=11432.47 00:31:45.157 lat (usec): min=4807, max=83533, avg=22072.03, stdev=11540.61 00:31:45.157 clat percentiles (usec): 00:31:45.157 | 1.00th=[ 4948], 5.00th=[ 6980], 10.00th=[ 8717], 20.00th=[12649], 00:31:45.157 | 30.00th=[13960], 40.00th=[15926], 50.00th=[22152], 60.00th=[23462], 00:31:45.157 | 70.00th=[27657], 80.00th=[28967], 90.00th=[33817], 95.00th=[42206], 00:31:45.157 | 99.00th=[62129], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:31:45.157 | 99.99th=[78119] 00:31:45.157 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:31:45.157 slat (usec): min=2, max=16049, avg=234.42, stdev=1129.10 00:31:45.157 clat (usec): min=489, max=104898, avg=30864.59, stdev=24444.99 00:31:45.157 lat (usec): min=513, max=104916, avg=31099.01, stdev=24593.84 00:31:45.157 clat percentiles (usec): 00:31:45.157 | 1.00th=[ 1647], 5.00th=[ 7701], 10.00th=[ 10552], 20.00th=[ 13042], 00:31:45.157 | 30.00th=[ 14353], 40.00th=[ 18482], 50.00th=[ 22152], 60.00th=[ 26608], 00:31:45.157 | 70.00th=[ 34341], 80.00th=[ 44303], 90.00th=[ 77071], 95.00th=[ 88605], 00:31:45.157 | 99.00th=[102237], 99.50th=[104334], 99.90th=[105382], 99.95th=[105382], 
00:31:45.157 | 99.99th=[105382] 00:31:45.157 bw ( KiB/s): min= 8192, max=12288, per=15.44%, avg=10240.00, stdev=2896.31, samples=2 00:31:45.157 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:31:45.157 lat (usec) : 500=0.02%, 750=0.15%, 1000=0.06% 00:31:45.157 lat (msec) : 2=0.33%, 4=1.54%, 10=7.50%, 20=36.18%, 50=43.96% 00:31:45.157 lat (msec) : 100=9.65%, 250=0.60% 00:31:45.157 cpu : usr=1.29%, sys=2.98%, ctx=248, majf=0, minf=2 00:31:45.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:45.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.157 issued rwts: total=2238,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.157 00:31:45.157 Run status group 0 (all jobs): 00:31:45.157 READ: bw=59.7MiB/s (62.5MB/s), 8872KiB/s-22.8MiB/s (9085kB/s-23.9MB/s), io=60.2MiB (63.1MB), run=1004-1009msec 00:31:45.157 WRITE: bw=64.8MiB/s (67.9MB/s), 9.91MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=65.3MiB (68.5MB), run=1004-1009msec 00:31:45.157 00:31:45.157 Disk stats (read/write): 00:31:45.157 nvme0n1: ios=2585/3072, merge=0/0, ticks=34217/69065, in_queue=103282, util=97.29% 00:31:45.157 nvme0n2: ios=3121/3442, merge=0/0, ticks=38740/62147, in_queue=100887, util=92.08% 00:31:45.157 nvme0n3: ios=5180/5191, merge=0/0, ticks=48770/46502, in_queue=95272, util=93.86% 00:31:45.157 nvme0n4: ios=2105/2103, merge=0/0, ticks=26721/41869, in_queue=68590, util=94.97% 00:31:45.157 16:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:45.157 [global] 00:31:45.157 thread=1 00:31:45.157 invalidate=1 00:31:45.157 rw=randwrite 00:31:45.157 time_based=1 00:31:45.157 runtime=1 00:31:45.157 ioengine=libaio 
00:31:45.157 direct=1 00:31:45.157 bs=4096 00:31:45.157 iodepth=128 00:31:45.157 norandommap=0 00:31:45.157 numjobs=1 00:31:45.157 00:31:45.157 verify_dump=1 00:31:45.157 verify_backlog=512 00:31:45.157 verify_state_save=0 00:31:45.157 do_verify=1 00:31:45.157 verify=crc32c-intel 00:31:45.157 [job0] 00:31:45.157 filename=/dev/nvme0n1 00:31:45.157 [job1] 00:31:45.157 filename=/dev/nvme0n2 00:31:45.157 [job2] 00:31:45.157 filename=/dev/nvme0n3 00:31:45.157 [job3] 00:31:45.157 filename=/dev/nvme0n4 00:31:45.157 Could not set queue depth (nvme0n1) 00:31:45.157 Could not set queue depth (nvme0n2) 00:31:45.157 Could not set queue depth (nvme0n3) 00:31:45.157 Could not set queue depth (nvme0n4) 00:31:45.415 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.415 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.415 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.415 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.415 fio-3.35 00:31:45.415 Starting 4 threads 00:31:46.792 00:31:46.792 job0: (groupid=0, jobs=1): err= 0: pid=2965769: Wed Nov 20 16:24:47 2024 00:31:46.792 read: IOPS=5855, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1005msec) 00:31:46.792 slat (nsec): min=1078, max=9700.0k, avg=82952.17, stdev=495769.47 00:31:46.792 clat (usec): min=503, max=21704, avg=10604.55, stdev=1926.44 00:31:46.792 lat (usec): min=4738, max=21706, avg=10687.50, stdev=1950.68 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 5276], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9503], 00:31:46.792 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:31:46.792 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12780], 95.00th=[13566], 00:31:46.792 | 99.00th=[18220], 99.50th=[19268], 99.90th=[21627], 
99.95th=[21627], 00:31:46.792 | 99.99th=[21627] 00:31:46.792 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:31:46.792 slat (nsec): min=1804, max=6179.4k, avg=78816.23, stdev=425399.97 00:31:46.792 clat (usec): min=5553, max=17876, avg=10420.99, stdev=1053.31 00:31:46.792 lat (usec): min=5559, max=17886, avg=10499.81, stdev=1078.67 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 7177], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10028], 00:31:46.792 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:31:46.792 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10945], 95.00th=[11994], 00:31:46.792 | 99.00th=[14615], 99.50th=[14615], 99.90th=[14746], 99.95th=[15139], 00:31:46.792 | 99.99th=[17957] 00:31:46.792 bw ( KiB/s): min=24576, max=24576, per=34.39%, avg=24576.00, stdev= 0.00, samples=2 00:31:46.792 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:31:46.792 lat (usec) : 750=0.01% 00:31:46.792 lat (msec) : 10=26.22%, 20=73.62%, 50=0.15% 00:31:46.792 cpu : usr=3.78%, sys=5.68%, ctx=585, majf=0, minf=1 00:31:46.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.792 issued rwts: total=5885,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.792 job1: (groupid=0, jobs=1): err= 0: pid=2965770: Wed Nov 20 16:24:47 2024 00:31:46.792 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:31:46.792 slat (nsec): min=1508, max=14760k, avg=93419.18, stdev=591325.95 00:31:46.792 clat (usec): min=4366, max=30952, avg=11792.19, stdev=2860.21 00:31:46.792 lat (usec): min=4373, max=30959, avg=11885.61, stdev=2907.69 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9634], 
20.00th=[10028], 00:31:46.792 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:31:46.792 | 70.00th=[11731], 80.00th=[12780], 90.00th=[16057], 95.00th=[16909], 00:31:46.792 | 99.00th=[24249], 99.50th=[28181], 99.90th=[31065], 99.95th=[31065], 00:31:46.792 | 99.99th=[31065] 00:31:46.792 write: IOPS=4380, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1004msec); 0 zone resets 00:31:46.792 slat (usec): min=2, max=13536, avg=134.53, stdev=822.58 00:31:46.792 clat (usec): min=320, max=134875, avg=17885.28, stdev=21259.12 00:31:46.792 lat (usec): min=850, max=134883, avg=18019.81, stdev=21400.93 00:31:46.792 clat percentiles (msec): 00:31:46.792 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:31:46.792 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:31:46.792 | 70.00th=[ 13], 80.00th=[ 17], 90.00th=[ 32], 95.00th=[ 58], 00:31:46.792 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 136], 00:31:46.792 | 99.99th=[ 136] 00:31:46.792 bw ( KiB/s): min=11368, max=22792, per=23.90%, avg=17080.00, stdev=8077.99, samples=2 00:31:46.792 iops : min= 2842, max= 5698, avg=4270.00, stdev=2019.50, samples=2 00:31:46.792 lat (usec) : 500=0.02%, 1000=0.02% 00:31:46.792 lat (msec) : 2=0.31%, 4=0.48%, 10=16.36%, 20=72.43%, 50=7.04% 00:31:46.792 lat (msec) : 100=1.84%, 250=1.50% 00:31:46.792 cpu : usr=3.49%, sys=4.69%, ctx=434, majf=0, minf=2 00:31:46.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.792 issued rwts: total=4096,4398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.792 job2: (groupid=0, jobs=1): err= 0: pid=2965771: Wed Nov 20 16:24:47 2024 00:31:46.792 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:31:46.792 slat (nsec): min=1352, max=13634k, 
avg=111821.71, stdev=865738.30 00:31:46.792 clat (usec): min=3743, max=46342, avg=13689.88, stdev=5589.37 00:31:46.792 lat (usec): min=3749, max=46351, avg=13801.70, stdev=5668.69 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 5997], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10552], 00:31:46.792 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12518], 60.00th=[13566], 00:31:46.792 | 70.00th=[14222], 80.00th=[15795], 90.00th=[19530], 95.00th=[22938], 00:31:46.792 | 99.00th=[39060], 99.50th=[43254], 99.90th=[43779], 99.95th=[46400], 00:31:46.792 | 99.99th=[46400] 00:31:46.792 write: IOPS=4373, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1009msec); 0 zone resets 00:31:46.792 slat (usec): min=2, max=11400, avg=115.30, stdev=629.71 00:31:46.792 clat (usec): min=2287, max=50453, avg=16214.33, stdev=10026.55 00:31:46.792 lat (usec): min=2291, max=50460, avg=16329.63, stdev=10094.66 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 4293], 5.00th=[ 7308], 10.00th=[ 8356], 20.00th=[10552], 00:31:46.792 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:31:46.792 | 70.00th=[13960], 80.00th=[24773], 90.00th=[34866], 95.00th=[35390], 00:31:46.792 | 99.00th=[44303], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:31:46.792 | 99.99th=[50594] 00:31:46.792 bw ( KiB/s): min=12168, max=22120, per=23.99%, avg=17144.00, stdev=7037.13, samples=2 00:31:46.792 iops : min= 3042, max= 5530, avg=4286.00, stdev=1759.28, samples=2 00:31:46.792 lat (msec) : 4=0.56%, 10=13.61%, 20=70.91%, 50=14.83%, 100=0.08% 00:31:46.792 cpu : usr=3.47%, sys=4.76%, ctx=443, majf=0, minf=1 00:31:46.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.792 issued rwts: total=4096,4413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.792 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:46.792 job3: (groupid=0, jobs=1): err= 0: pid=2965772: Wed Nov 20 16:24:47 2024 00:31:46.792 read: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec) 00:31:46.792 slat (nsec): min=1119, max=14897k, avg=128507.86, stdev=888112.09 00:31:46.792 clat (usec): min=3236, max=55272, avg=15936.62, stdev=5509.47 00:31:46.792 lat (usec): min=7103, max=56183, avg=16065.13, stdev=5570.49 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 7373], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:31:46.792 | 30.00th=[13042], 40.00th=[14091], 50.00th=[14877], 60.00th=[16909], 00:31:46.792 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[21103], 00:31:46.792 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:31:46.792 | 99.99th=[55313] 00:31:46.792 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:31:46.792 slat (nsec): min=1775, max=25289k, avg=176092.11, stdev=1361308.97 00:31:46.792 clat (usec): min=4788, max=78020, avg=25833.69, stdev=18082.65 00:31:46.792 lat (usec): min=4799, max=78028, avg=26009.78, stdev=18191.66 00:31:46.792 clat percentiles (usec): 00:31:46.792 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[10290], 20.00th=[11469], 00:31:46.792 | 30.00th=[11994], 40.00th=[17171], 50.00th=[17695], 60.00th=[21890], 00:31:46.792 | 70.00th=[30278], 80.00th=[43254], 90.00th=[54264], 95.00th=[65799], 00:31:46.792 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:31:46.792 | 99.99th=[78119] 00:31:46.792 bw ( KiB/s): min= 8752, max=15824, per=17.19%, avg=12288.00, stdev=5000.66, samples=2 00:31:46.792 iops : min= 2188, max= 3956, avg=3072.00, stdev=1250.16, samples=2 00:31:46.792 lat (msec) : 4=0.02%, 10=5.51%, 20=67.03%, 50=20.03%, 100=7.42% 00:31:46.792 cpu : usr=1.89%, sys=3.48%, ctx=199, majf=0, minf=1 00:31:46.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.792 issued rwts: total=3009,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.792 00:31:46.792 Run status group 0 (all jobs): 00:31:46.792 READ: bw=66.1MiB/s (69.4MB/s), 11.7MiB/s-22.9MiB/s (12.3MB/s-24.0MB/s), io=66.7MiB (70.0MB), run=1004-1009msec 00:31:46.792 WRITE: bw=69.8MiB/s (73.2MB/s), 11.9MiB/s-23.9MiB/s (12.5MB/s-25.0MB/s), io=70.4MiB (73.8MB), run=1004-1009msec 00:31:46.792 00:31:46.792 Disk stats (read/write): 00:31:46.792 nvme0n1: ios=4653/4950, merge=0/0, ticks=19563/19287, in_queue=38850, util=95.79% 00:31:46.792 nvme0n2: ios=3094/3079, merge=0/0, ticks=22867/52207, in_queue=75074, util=97.33% 00:31:46.792 nvme0n3: ios=3641/3790, merge=0/0, ticks=46270/51749, in_queue=98019, util=97.40% 00:31:46.792 nvme0n4: ios=2417/2560, merge=0/0, ticks=21488/31561, in_queue=53049, util=88.55% 00:31:46.792 16:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:46.792 16:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2965986 00:31:46.793 16:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:46.793 16:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:46.793 [global] 00:31:46.793 thread=1 00:31:46.793 invalidate=1 00:31:46.793 rw=read 00:31:46.793 time_based=1 00:31:46.793 runtime=10 00:31:46.793 ioengine=libaio 00:31:46.793 direct=1 00:31:46.793 bs=4096 00:31:46.793 iodepth=1 00:31:46.793 norandommap=1 00:31:46.793 numjobs=1 00:31:46.793 00:31:46.793 [job0] 00:31:46.793 filename=/dev/nvme0n1 00:31:46.793 [job1] 00:31:46.793 filename=/dev/nvme0n2 00:31:46.793 [job2] 00:31:46.793 
filename=/dev/nvme0n3 00:31:46.793 [job3] 00:31:46.793 filename=/dev/nvme0n4 00:31:46.793 Could not set queue depth (nvme0n1) 00:31:46.793 Could not set queue depth (nvme0n2) 00:31:46.793 Could not set queue depth (nvme0n3) 00:31:46.793 Could not set queue depth (nvme0n4) 00:31:47.062 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.062 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.062 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.062 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.062 fio-3.35 00:31:47.062 Starting 4 threads 00:31:49.596 16:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:49.854 16:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:49.854 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27140096, buflen=4096 00:31:49.854 fio: pid=2966139, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:50.113 16:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.113 16:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:50.113 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:31:50.113 fio: pid=2966138, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:50.373 16:24:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.373 16:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:50.373 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=17698816, buflen=4096 00:31:50.373 fio: pid=2966134, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:50.373 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.373 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:50.633 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=11141120, buflen=4096 00:31:50.633 fio: pid=2966137, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:31:50.633 00:31:50.633 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2966134: Wed Nov 20 16:24:51 2024 00:31:50.633 read: IOPS=1370, BW=5480KiB/s (5612kB/s)(16.9MiB/3154msec) 00:31:50.633 slat (usec): min=6, max=20730, avg=14.96, stdev=369.27 00:31:50.633 clat (usec): min=196, max=41909, avg=707.92, stdev=4136.74 00:31:50.633 lat (usec): min=203, max=62046, avg=722.88, stdev=4229.37 00:31:50.633 clat percentiles (usec): 00:31:50.633 | 1.00th=[ 223], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 277], 00:31:50.633 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 285], 00:31:50.633 | 70.00th=[ 289], 80.00th=[ 289], 90.00th=[ 293], 95.00th=[ 297], 00:31:50.633 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:50.633 | 99.99th=[41681] 00:31:50.633 bw ( 
KiB/s): min= 93, max=13712, per=35.24%, avg=5756.83, stdev=6443.48, samples=6 00:31:50.633 iops : min= 23, max= 3428, avg=1439.17, stdev=1610.92, samples=6 00:31:50.633 lat (usec) : 250=2.71%, 500=96.18%, 750=0.05% 00:31:50.633 lat (msec) : 50=1.04% 00:31:50.633 cpu : usr=0.44%, sys=1.17%, ctx=4324, majf=0, minf=1 00:31:50.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.633 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.633 issued rwts: total=4322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.633 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2966137: Wed Nov 20 16:24:51 2024 00:31:50.633 read: IOPS=808, BW=3233KiB/s (3311kB/s)(10.6MiB/3365msec) 00:31:50.633 slat (usec): min=7, max=6751, avg=15.55, stdev=208.92 00:31:50.633 clat (usec): min=166, max=43061, avg=1209.60, stdev=6195.52 00:31:50.633 lat (usec): min=175, max=47923, avg=1222.81, stdev=6229.76 00:31:50.633 clat percentiles (usec): 00:31:50.633 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:31:50.633 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:31:50.633 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 260], 00:31:50.633 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:50.633 | 99.99th=[43254] 00:31:50.633 bw ( KiB/s): min= 93, max=15504, per=22.13%, avg=3615.50, stdev=6254.93, samples=6 00:31:50.633 iops : min= 23, max= 3876, avg=903.83, stdev=1563.76, samples=6 00:31:50.633 lat (usec) : 250=68.32%, 500=29.29% 00:31:50.633 lat (msec) : 50=2.35% 00:31:50.633 cpu : usr=0.65%, sys=1.37%, ctx=2724, majf=0, minf=1 00:31:50.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.633 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.633 issued rwts: total=2721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.633 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2966138: Wed Nov 20 16:24:51 2024 00:31:50.633 read: IOPS=24, BW=97.8KiB/s (100kB/s)(288KiB/2945msec) 00:31:50.633 slat (usec): min=10, max=12882, avg=199.44, stdev=1505.03 00:31:50.633 clat (usec): min=291, max=41118, avg=40400.87, stdev=4794.05 00:31:50.633 lat (usec): min=328, max=54000, avg=40602.78, stdev=5052.19 00:31:50.633 clat percentiles (usec): 00:31:50.633 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:50.633 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:50.633 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:50.633 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:50.633 | 99.99th=[41157] 00:31:50.633 bw ( KiB/s): min= 96, max= 104, per=0.61%, avg=99.20, stdev= 4.38, samples=5 00:31:50.633 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:31:50.633 lat (usec) : 500=1.37% 00:31:50.633 lat (msec) : 50=97.26% 00:31:50.633 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=2 00:31:50.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.633 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.633 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.633 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2966139: Wed Nov 20 16:24:51 2024 00:31:50.633 read: IOPS=2417, BW=9669KiB/s 
(9902kB/s)(25.9MiB/2741msec) 00:31:50.633 slat (nsec): min=6561, max=32938, avg=8010.91, stdev=1546.52 00:31:50.633 clat (usec): min=181, max=42005, avg=401.20, stdev=2290.77 00:31:50.633 lat (usec): min=188, max=42028, avg=409.21, stdev=2291.55 00:31:50.633 clat percentiles (usec): 00:31:50.634 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 265], 00:31:50.634 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:31:50.634 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 297], 95.00th=[ 302], 00:31:50.634 | 99.00th=[ 318], 99.50th=[ 396], 99.90th=[41157], 99.95th=[41157], 00:31:50.634 | 99.99th=[42206] 00:31:50.634 bw ( KiB/s): min= 104, max=15920, per=64.41%, avg=10520.00, stdev=6219.68, samples=5 00:31:50.634 iops : min= 26, max= 3980, avg=2630.00, stdev=1554.92, samples=5 00:31:50.634 lat (usec) : 250=17.70%, 500=81.92%, 750=0.03%, 1000=0.02% 00:31:50.634 lat (msec) : 50=0.32% 00:31:50.634 cpu : usr=0.58%, sys=2.37%, ctx=6627, majf=0, minf=2 00:31:50.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.634 issued rwts: total=6627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.634 00:31:50.634 Run status group 0 (all jobs): 00:31:50.634 READ: bw=15.9MiB/s (16.7MB/s), 97.8KiB/s-9669KiB/s (100kB/s-9902kB/s), io=53.7MiB (56.3MB), run=2741-3365msec 00:31:50.634 00:31:50.634 Disk stats (read/write): 00:31:50.634 nvme0n1: ios=4320/0, merge=0/0, ticks=3006/0, in_queue=3006, util=94.70% 00:31:50.634 nvme0n2: ios=2757/0, merge=0/0, ticks=3899/0, in_queue=3899, util=98.05% 00:31:50.634 nvme0n3: ios=70/0, merge=0/0, ticks=2829/0, in_queue=2829, util=96.11% 00:31:50.634 nvme0n4: ios=6623/0, merge=0/0, ticks=2483/0, in_queue=2483, util=96.41% 00:31:50.634 16:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.634 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:50.893 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.893 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:51.152 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:51.152 16:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:51.412 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:51.412 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:51.412 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:51.412 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2965986 00:31:51.412 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:51.412 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:51.671 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:51.671 nvmf hotplug test: fio failed as expected 00:31:51.671 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.931 rmmod nvme_tcp 00:31:51.931 rmmod nvme_fabrics 00:31:51.931 rmmod nvme_keyring 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2963317 ']' 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2963317 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2963317 ']' 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2963317 00:31:51.931 16:24:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2963317 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2963317' 00:31:51.931 killing process with pid 2963317 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2963317 00:31:51.931 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2963317 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.191 
16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.191 16:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.729 00:31:54.729 real 0m25.912s 00:31:54.729 user 1m30.210s 00:31:54.729 sys 0m10.984s 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 ************************************ 00:31:54.729 END TEST nvmf_fio_target 00:31:54.729 ************************************ 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.729 16:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 ************************************ 00:31:54.729 START TEST nvmf_bdevio 00:31:54.729 
************************************ 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:54.729 * Looking for test storage... 00:31:54.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.729 --rc genhtml_branch_coverage=1 00:31:54.729 --rc genhtml_function_coverage=1 00:31:54.729 --rc genhtml_legend=1 00:31:54.729 --rc geninfo_all_blocks=1 00:31:54.729 --rc geninfo_unexecuted_blocks=1 00:31:54.729 00:31:54.729 ' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.729 --rc genhtml_branch_coverage=1 00:31:54.729 --rc genhtml_function_coverage=1 00:31:54.729 --rc genhtml_legend=1 00:31:54.729 --rc geninfo_all_blocks=1 00:31:54.729 --rc geninfo_unexecuted_blocks=1 00:31:54.729 00:31:54.729 ' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.729 --rc genhtml_branch_coverage=1 00:31:54.729 --rc genhtml_function_coverage=1 00:31:54.729 --rc genhtml_legend=1 00:31:54.729 --rc geninfo_all_blocks=1 00:31:54.729 --rc geninfo_unexecuted_blocks=1 00:31:54.729 00:31:54.729 ' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:54.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:54.729 --rc genhtml_branch_coverage=1 00:31:54.729 --rc genhtml_function_coverage=1 00:31:54.729 --rc genhtml_legend=1 00:31:54.729 --rc geninfo_all_blocks=1 00:31:54.729 --rc geninfo_unexecuted_blocks=1 00:31:54.729 00:31:54.729 ' 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.729 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.730 16:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.730 16:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.730 16:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.007 16:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.007 16:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:00.007 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:00.007 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.007 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:00.008 Found net devices under 0000:86:00.0: cvl_0_0 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:00.008 Found net devices under 0000:86:00.1: cvl_0_1 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.008 
16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.008 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.268 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.268 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.268 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:00.268 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.268 16:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:32:00.268 00:32:00.268 --- 10.0.0.2 ping statistics --- 00:32:00.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.268 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:32:00.268 00:32:00.268 --- 10.0.0.1 ping statistics --- 00:32:00.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.268 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.268 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2970374 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2970374 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2970374 ']' 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.528 [2024-11-20 16:25:01.153613] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.528 [2024-11-20 16:25:01.154557] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:32:00.528 [2024-11-20 16:25:01.154590] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.528 [2024-11-20 16:25:01.217310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.528 [2024-11-20 16:25:01.259898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.528 [2024-11-20 16:25:01.259936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.528 [2024-11-20 16:25:01.259943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.528 [2024-11-20 16:25:01.259963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.528 [2024-11-20 16:25:01.259968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.528 [2024-11-20 16:25:01.261376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:00.528 [2024-11-20 16:25:01.261473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:00.528 [2024-11-20 16:25:01.261579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.528 [2024-11-20 16:25:01.261579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:00.528 [2024-11-20 16:25:01.329682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.528 [2024-11-20 16:25:01.330243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.528 [2024-11-20 16:25:01.330745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:00.528 [2024-11-20 16:25:01.330963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:00.528 [2024-11-20 16:25:01.331083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.528 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.787 [2024-11-20 16:25:01.406366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.787 Malloc0 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.787 [2024-11-20 16:25:01.494374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.787 { 00:32:00.787 "params": { 00:32:00.787 "name": "Nvme$subsystem", 00:32:00.787 "trtype": "$TEST_TRANSPORT", 00:32:00.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.787 "adrfam": "ipv4", 00:32:00.787 "trsvcid": "$NVMF_PORT", 00:32:00.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.787 "hdgst": ${hdgst:-false}, 00:32:00.787 "ddgst": ${ddgst:-false} 00:32:00.787 }, 00:32:00.787 "method": "bdev_nvme_attach_controller" 00:32:00.787 } 00:32:00.787 EOF 00:32:00.787 )") 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
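The heredoc template traced above is what `gen_nvmf_target_json` in `nvmf/common.sh` expands into the `bdev_nvme_attach_controller` JSON printed next in the log (and fed to `bdevio --json /dev/fd/62`). As a rough illustration only — the real logic is shell, and the function and variable names below are just mirrors of the template's placeholders — the expansion for subsystem 1 could be sketched in Python as:

```python
import json

def attach_controller_entry(n, trtype="tcp", traddr="10.0.0.2", trsvcid="4420"):
    """Build one bdev_nvme_attach_controller config entry, mirroring the
    shell heredoc template ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP,
    $NVMF_PORT, cnode$subsystem/host$subsystem) for subsystem number n."""
    return {
        "params": {
            "name": f"Nvme{n}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
            "hdgst": False,   # ${hdgst:-false} in the template
            "ddgst": False,   # ${ddgst:-false} in the template
        },
        "method": "bdev_nvme_attach_controller",
    }

# Reproduces the JSON object that printf '%s\n' | jq . emits below.
print(json.dumps(attach_controller_entry(1), indent=2))
```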
00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:00.787 16:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:00.787 "params": { 00:32:00.787 "name": "Nvme1", 00:32:00.787 "trtype": "tcp", 00:32:00.787 "traddr": "10.0.0.2", 00:32:00.787 "adrfam": "ipv4", 00:32:00.787 "trsvcid": "4420", 00:32:00.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.787 "hdgst": false, 00:32:00.787 "ddgst": false 00:32:00.787 }, 00:32:00.787 "method": "bdev_nvme_attach_controller" 00:32:00.787 }' 00:32:00.787 [2024-11-20 16:25:01.546081] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:32:00.787 [2024-11-20 16:25:01.546131] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970403 ] 00:32:01.047 [2024-11-20 16:25:01.621721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:01.047 [2024-11-20 16:25:01.666305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.047 [2024-11-20 16:25:01.666411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.047 [2024-11-20 16:25:01.666412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.305 I/O targets: 00:32:01.305 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:01.305 00:32:01.305 00:32:01.305 CUnit - A unit testing framework for C - Version 2.1-3 00:32:01.305 http://cunit.sourceforge.net/ 00:32:01.305 00:32:01.305 00:32:01.305 Suite: bdevio tests on: Nvme1n1 00:32:01.305 Test: blockdev write read block ...passed 00:32:01.306 Test: blockdev write zeroes read block ...passed 00:32:01.306 Test: blockdev write zeroes read no split ...passed 00:32:01.306 Test: blockdev 
write zeroes read split ...passed 00:32:01.306 Test: blockdev write zeroes read split partial ...passed 00:32:01.306 Test: blockdev reset ...[2024-11-20 16:25:02.049975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:01.306 [2024-11-20 16:25:02.050043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5340 (9): Bad file descriptor 00:32:01.306 [2024-11-20 16:25:02.093897] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:01.306 passed 00:32:01.306 Test: blockdev write read 8 blocks ...passed 00:32:01.306 Test: blockdev write read size > 128k ...passed 00:32:01.306 Test: blockdev write read invalid size ...passed 00:32:01.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:01.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:01.564 Test: blockdev write read max offset ...passed 00:32:01.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:01.564 Test: blockdev writev readv 8 blocks ...passed 00:32:01.564 Test: blockdev writev readv 30 x 1block ...passed 00:32:01.564 Test: blockdev writev readv block ...passed 00:32:01.564 Test: blockdev writev readv size > 128k ...passed 00:32:01.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:01.564 Test: blockdev comparev and writev ...[2024-11-20 16:25:02.345954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.345983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.345997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 
[2024-11-20 16:25:02.346005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.346294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.346305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.346317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.346324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.346607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.346618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.346630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.346637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.346922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.346935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:01.564 [2024-11-20 16:25:02.346951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:01.564 [2024-11-20 16:25:02.346963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:01.564 passed 00:32:01.823 Test: blockdev nvme passthru rw ...passed 00:32:01.823 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:25:02.429360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:01.823 [2024-11-20 16:25:02.429376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:01.823 [2024-11-20 16:25:02.429485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:01.823 [2024-11-20 16:25:02.429495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:01.823 [2024-11-20 16:25:02.429603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:01.823 [2024-11-20 16:25:02.429613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:01.823 [2024-11-20 16:25:02.429721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:01.823 [2024-11-20 16:25:02.429730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:01.823 passed 00:32:01.823 Test: blockdev nvme admin passthru ...passed 00:32:01.823 Test: blockdev copy ...passed 00:32:01.823 00:32:01.823 Run Summary: Type Total Ran Passed Failed Inactive 00:32:01.823 suites 1 1 n/a 0 0 00:32:01.823 tests 23 23 23 0 0 00:32:01.823 asserts 152 152 152 0 n/a 00:32:01.823 00:32:01.823 Elapsed time = 1.169 
seconds 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.823 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.823 rmmod nvme_tcp 00:32:01.823 rmmod nvme_fabrics 00:32:02.082 rmmod nvme_keyring 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2970374 ']' 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2970374 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2970374 ']' 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2970374 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970374 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2970374' 00:32:02.082 killing process with pid 2970374 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2970374 00:32:02.082 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2970374 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.340 16:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.246 16:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.246 00:32:04.246 real 0m9.967s 00:32:04.246 user 0m9.117s 00:32:04.246 sys 0m5.168s 00:32:04.246 16:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.246 16:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:04.246 ************************************ 00:32:04.246 END TEST nvmf_bdevio 00:32:04.246 ************************************ 00:32:04.246 16:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:04.246 00:32:04.246 real 4m33.122s 00:32:04.246 user 9m11.840s 00:32:04.246 sys 1m51.368s 00:32:04.246 16:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:04.246 16:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:04.246 ************************************ 00:32:04.246 END TEST nvmf_target_core_interrupt_mode 00:32:04.246 ************************************ 00:32:04.246 16:25:05 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:04.246 16:25:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:04.246 16:25:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.246 16:25:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:04.505 ************************************ 00:32:04.505 START TEST nvmf_interrupt 00:32:04.505 ************************************ 00:32:04.505 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:04.505 * Looking for test storage... 
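The trace that follows walks `scripts/common.sh`'s `lt 1.15 2` / `cmp_versions` check, splitting each version on `.-:` into the `ver1`/`ver2` arrays and comparing field by field. A minimal Python sketch of that dotted-version comparison (illustrative only, not SPDK code; field handling is assumed from the trace):

```python
import re

def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    """Compare dotted version strings field by field, the way the traced
    cmp_versions does: split on '.', '-' or ':' (IFS=.-: in the trace),
    iterate up to the longer field count, treat missing fields as 0."""
    def fields(v):
        return [int(x) for x in re.split(r"[.:-]", v)]
    a, b = fields(ver1), fields(ver2)
    n = max(len(a), len(b))          # the ver1_l/ver2_l bookkeeping
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return {"<": x < y, ">": x > y}[op]
    return False  # equal versions satisfy neither strict comparison

# Matches the log: lcov 1.15 is older than 2, so 'lt 1.15 2' succeeds.
```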
00:32:04.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:04.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.506 --rc genhtml_branch_coverage=1 00:32:04.506 --rc genhtml_function_coverage=1 00:32:04.506 --rc genhtml_legend=1 00:32:04.506 --rc geninfo_all_blocks=1 00:32:04.506 --rc geninfo_unexecuted_blocks=1 00:32:04.506 00:32:04.506 ' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:04.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.506 --rc genhtml_branch_coverage=1 00:32:04.506 --rc 
genhtml_function_coverage=1 00:32:04.506 --rc genhtml_legend=1 00:32:04.506 --rc geninfo_all_blocks=1 00:32:04.506 --rc geninfo_unexecuted_blocks=1 00:32:04.506 00:32:04.506 ' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:04.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.506 --rc genhtml_branch_coverage=1 00:32:04.506 --rc genhtml_function_coverage=1 00:32:04.506 --rc genhtml_legend=1 00:32:04.506 --rc geninfo_all_blocks=1 00:32:04.506 --rc geninfo_unexecuted_blocks=1 00:32:04.506 00:32:04.506 ' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:04.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.506 --rc genhtml_branch_coverage=1 00:32:04.506 --rc genhtml_function_coverage=1 00:32:04.506 --rc genhtml_legend=1 00:32:04.506 --rc geninfo_all_blocks=1 00:32:04.506 --rc geninfo_unexecuted_blocks=1 00:32:04.506 00:32:04.506 ' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.506 
16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.506 16:25:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.506 
16:25:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:04.507 16:25:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:04.507 
16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:04.507 16:25:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.076 16:25:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:11.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:11.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.076 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.077 16:25:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:11.077 Found net devices under 0000:86:00.0: cvl_0_0 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:11.077 Found net devices under 0000:86:00.1: cvl_0_1 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.077 16:25:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.077 16:25:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:11.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:32:11.077 00:32:11.077 --- 10.0.0.2 ping statistics --- 00:32:11.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.077 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:11.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:32:11.077 00:32:11.077 --- 10.0.0.1 ping statistics --- 00:32:11.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.077 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:11.077 16:25:11 
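The connectivity probe above passes as long as `ping -c 1` reports zero packet loss across the two namespaced interfaces. A minimal standalone sketch of that check follows; the `parse_loss` helper is hypothetical (not a function in nvmf/common.sh) and only mirrors what the log's ping output shows:

```shell
# Hypothetical helper mirroring the log's ping check: pull the packet-loss
# percentage out of a ping statistics block and classify reachability.
parse_loss() {
    # stdin: ping output; stdout: integer loss percentage
    grep -o '[0-9]\+% packet loss' | cut -d% -f1
}

stats='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
loss=$(printf '%s\n' "$stats" | parse_loss)
if [ "$loss" -eq 0 ]; then echo reachable; else echo unreachable; fi
```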
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2974163 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2974163 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2974163 ']' 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.077 16:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.077 [2024-11-20 16:25:11.273470] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:11.077 [2024-11-20 16:25:11.274462] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:32:11.077 [2024-11-20 16:25:11.274502] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.077 [2024-11-20 16:25:11.366325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:11.077 [2024-11-20 16:25:11.407678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.077 [2024-11-20 16:25:11.407716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.077 [2024-11-20 16:25:11.407723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.077 [2024-11-20 16:25:11.407730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.077 [2024-11-20 16:25:11.407735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.077 [2024-11-20 16:25:11.408926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.077 [2024-11-20 16:25:11.408927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.077 [2024-11-20 16:25:11.477260] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:11.077 [2024-11-20 16:25:11.477833] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:11.077 [2024-11-20 16:25:11.478054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
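The `waitforlisten 2974163` step above blocks until the target creates its RPC socket at /var/tmp/spdk.sock. A simplified sketch of that polling loop, under the assumption that the real helper in autotest_common.sh additionally verifies the pid is alive and issues an RPC (the `wait_for_sock` name is an illustration, not the actual function):

```shell
# Simplified stand-in for waitforlisten: poll until a UNIX-domain socket
# exists, giving up after max_retries * 0.1 s.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}

# With no target running, the wait times out and returns non-zero.
wait_for_sock /var/tmp/definitely-missing.sock 3 && rc=0 || rc=$?
echo "$rc"
```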
00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:11.337 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:11.596 5000+0 records in 00:32:11.596 5000+0 records out 00:32:11.596 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188509 s, 543 MB/s 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.596 AIO0 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.596 16:25:12 
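The `dd` step above sizes the AIO bdev's backing store: 5000 blocks of 2048 bytes from /dev/zero gives exactly the 10240000 bytes reported, which `bdev_aio_create ... AIO0 2048` then exposes with a 2048-byte block size. The file creation can be reproduced in isolation (the temp-file path here is illustrative; the test uses the workspace path shown in the log):

```shell
# Recreate the AIO backing file from the log: 5000 * 2048 B = 10240000 B.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none
size=$(stat -c %s "$aiofile")
echo "$size"
rm -f "$aiofile"
```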
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.596 [2024-11-20 16:25:12.221728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.596 [2024-11-20 16:25:12.262106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2974163 0 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2974163 0 idle 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:11.596 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:11.855 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974163 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.27 reactor_0' 00:32:11.855 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974163 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.27 reactor_0 00:32:11.855 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:11.855 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:11.855 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:11.856 
16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2974163 1 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2974163 1 idle 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974167 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974167 root 20 0 128.2g 
45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2974425 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2974163 0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2974163 0 busy 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:11.856 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974163 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.46 reactor_0' 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974163 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.46 reactor_0 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:12.115 16:25:12 
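The busy/idle classification in the trace above boils down to parsing one `top -bHn 1` thread line: field 9 is %CPU, the log shows 99.9 being truncated to 99, and the integer is compared against the threshold (BUSY_THRESHOLD=30 in this run). A self-contained re-creation on a sample line from the log; the truncation via parameter expansion is an assumption about how interrupt/common.sh drops the fraction:

```shell
# Parse %CPU from a captured top(1) per-thread line, as the reactor
# busy/idle check does: strip leading whitespace, take field 9, drop the
# fractional part, then compare against the busy threshold.
top_line='2974163 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.46 reactor_0'
cpu_rate=$(printf '%s\n' "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%%.*}   # "99.9" -> "99" (truncation step is assumed)
if [ "$cpu_rate" -ge 30 ]; then echo busy; else echo idle; fi
```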
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2974163 1 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2974163 1 busy 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:12.115 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:12.374 16:25:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974167 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.27 reactor_1' 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974167 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.27 reactor_1 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:12.374 16:25:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2974425 00:32:22.370 Initializing NVMe Controllers 00:32:22.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.370 Controller IO queue size 256, less than required. 00:32:22.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:22.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:22.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:22.370 Initialization complete. Launching workers. 
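The reactor_is_busy check traced above reduces to: take one `top -bHn 1` sample of the target PID, grep out the reactor thread, pull the %CPU column, truncate it to an integer, and compare it against BUSY_THRESHOLD. A minimal sketch of that logic, assuming the `top` line format echoed in the log (`parse_cpu_rate` and `reactor_state` are illustrative names, not the actual helpers in interrupt/common.sh):

```shell
# Extract the %CPU column (field 9) from one `top -bH` thread line,
# e.g. "2974167 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.27 reactor_1"
parse_cpu_rate() {
    local top_line=$1 rate
    rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
    echo "${rate%.*}"   # drop the fractional part, as the traced script does
}

# Classify a reactor as busy or idle against a %CPU threshold.
reactor_state() {
    local cpu_rate=$1 busy_threshold=${2:-30}
    if (( cpu_rate >= busy_threshold )); then
        echo busy
    else
        echo idle
    fi
}
```

With the sampled line above this yields cpu_rate=99, which clears the busy threshold of 30, so the busy check returns success.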
00:32:22.370 ========================================================
00:32:22.370 Latency(us)
00:32:22.370 Device Information : IOPS MiB/s Average min max
00:32:22.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16533.99 64.59 15491.92 3180.54 55391.96
00:32:22.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16234.59 63.42 15775.53 7398.80 57886.80
00:32:22.370 ========================================================
00:32:22.370 Total : 32768.59 128.00 15632.43 3180.54 57886.80
00:32:22.370
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2974163 0
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2974163 0 idle
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256
00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- #
grep reactor_0 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974163 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:20.27 reactor_0' 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974163 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:20.27 reactor_0 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:22.370 16:25:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2974163 1 00:32:22.370 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2974163 1 idle 00:32:22.370 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:22.370 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:22.370 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:22.370 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:22.370 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:22.371 16:25:23 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974167 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974167 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:22.371 16:25:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:22.939 16:25:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
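The `nvme connect` above is followed by waitforserial, which the subsequent trace shows polling `lsblk -l -o NAME,SERIAL` until the expected number of devices carrying the subsystem serial appears. A hedged sketch of that loop (retry count and 2-second sleep taken from the traced autotest_common.sh lines; this is a simplification, not the exact helper):

```shell
# Poll lsblk until a block device with the given serial shows up, or
# give up after ~15 attempts (mirrors the i++ <= 15 / sleep 2 pattern
# visible in the trace).
waitforserial() {
    local serial=$1 expected=${2:-1} i=0
    while (( i++ <= 15 )); do
        local nvme_devices
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == expected )) && return 0
        sleep 2
    done
    return 1   # the device never appeared
}
```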
00:32:22.939 16:25:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:22.939 16:25:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:22.939 16:25:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:22.939 16:25:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2974163 0 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2974163 0 idle 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:24.844 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:24.845 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974163 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.48 reactor_0' 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974163 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.48 reactor_0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2974163 1 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2974163 1 idle 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2974163 00:32:25.103 
16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2974163 -w 256 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2974167 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2974167 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:25.103 16:25:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:25.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.362 rmmod nvme_tcp 00:32:25.362 rmmod nvme_fabrics 00:32:25.362 rmmod nvme_keyring 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.362 16:25:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2974163 ']' 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2974163 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2974163 ']' 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2974163 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.362 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2974163 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2974163' 00:32:25.621 killing process with pid 2974163 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2974163 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2974163 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore
00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:25.621 16:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:28.155 16:25:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:28.155
00:32:28.155 real 0m23.384s
00:32:28.155 user 0m39.750s
00:32:28.155 sys 0m8.523s
00:32:28.155 16:25:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:28.155 16:25:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:28.155 ************************************
00:32:28.155 END TEST nvmf_interrupt
00:32:28.155 ************************************
00:32:28.155
00:32:28.155 real 27m32.517s
00:32:28.155 user 57m4.776s
00:32:28.155 sys 9m18.251s
00:32:28.155 16:25:28 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:28.155 16:25:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:28.155 ************************************
00:32:28.155 END TEST nvmf_tcp
00:32:28.155 ************************************
00:32:28.155 16:25:28 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:32:28.155 16:25:28 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:28.155 16:25:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:28.155 16:25:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:28.155 16:25:28 -- common/autotest_common.sh@10 -- # set +x
00:32:28.155 ************************************
00:32:28.155 START TEST spdkcli_nvmf_tcp 00:32:28.155 ************************************ 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:28.155 * Looking for test storage... 00:32:28.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.155 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:28.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.155 --rc genhtml_branch_coverage=1 00:32:28.155 --rc genhtml_function_coverage=1 00:32:28.155 --rc genhtml_legend=1 00:32:28.156 --rc geninfo_all_blocks=1 00:32:28.156 --rc geninfo_unexecuted_blocks=1 00:32:28.156 00:32:28.156 ' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:28.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.156 --rc genhtml_branch_coverage=1 00:32:28.156 --rc genhtml_function_coverage=1 00:32:28.156 --rc genhtml_legend=1 00:32:28.156 --rc geninfo_all_blocks=1 
00:32:28.156 --rc geninfo_unexecuted_blocks=1 00:32:28.156 00:32:28.156 ' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:28.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.156 --rc genhtml_branch_coverage=1 00:32:28.156 --rc genhtml_function_coverage=1 00:32:28.156 --rc genhtml_legend=1 00:32:28.156 --rc geninfo_all_blocks=1 00:32:28.156 --rc geninfo_unexecuted_blocks=1 00:32:28.156 00:32:28.156 ' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:28.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.156 --rc genhtml_branch_coverage=1 00:32:28.156 --rc genhtml_function_coverage=1 00:32:28.156 --rc genhtml_legend=1 00:32:28.156 --rc geninfo_all_blocks=1 00:32:28.156 --rc geninfo_unexecuted_blocks=1 00:32:28.156 00:32:28.156 ' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:28.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2977110 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2977110 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2977110 ']' 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:28.156 
16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.156 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.157 16:25:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.157 [2024-11-20 16:25:28.856119] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:32:28.157 [2024-11-20 16:25:28.856172] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977110 ] 00:32:28.157 [2024-11-20 16:25:28.931872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:28.157 [2024-11-20 16:25:28.976626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.157 [2024-11-20 16:25:28.976627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
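The waitforlisten call traced above for pid 2977110 blocks until the freshly started nvmf_tgt answers on its RPC socket (default /var/tmp/spdk.sock, max_retries=100 per the trace) or the process dies first. A simplified sketch of that wait, assuming a plain socket-file existence check stands in for the real RPC probe:

```shell
# Wait for a just-launched target: succeed once its RPC UNIX socket
# exists, fail fast if the process exits before listening.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died early
        [ -S "$rpc_addr" ] && return 0           # socket is up
        sleep 0.1
    done
    return 1
}
```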
00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.417 16:25:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:28.417 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:28.417 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:28.417 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:28.417 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:28.417 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:28.417 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:28.417 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:28.417 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:28.417 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:28.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:28.417 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:28.417 ' 00:32:31.110 [2024-11-20 16:25:31.806018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.486 [2024-11-20 16:25:33.150487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:35.020 [2024-11-20 16:25:35.634159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:37.583 [2024-11-20 16:25:37.817032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:38.967 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:38.967 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:38.967 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:38.967 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:38.967 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:38.967 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:38.967 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:38.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:38.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:38.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:38.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:38.967 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:38.967 16:25:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:39.226 16:25:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:39.226 16:25:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:39.226 16:25:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:39.226 16:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.226 16:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.485 16:25:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:39.485 16:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.485 16:25:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.485 16:25:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:39.485 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:39.485 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:39.485 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:39.485 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:39.485 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:39.485 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:39.485 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:39.485 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:39.485 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:39.485 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:39.485 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:39.485 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:39.485 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:39.485 ' 00:32:44.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:44.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:44.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:44.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:44.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:44.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:44.758 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:44.758 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:44.758 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:44.758 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:44.759 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:44.759 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:44.759 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:44.759 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2977110 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2977110 ']' 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2977110 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977110 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977110' 00:32:45.018 killing process with pid 2977110 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2977110 00:32:45.018 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2977110 00:32:45.278 16:25:45 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2977110 ']' 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2977110 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2977110 ']' 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2977110 00:32:45.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2977110) - No such process 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2977110 is not found' 00:32:45.278 Process with pid 2977110 is not found 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:45.278 00:32:45.278 real 0m17.367s 00:32:45.278 user 0m38.268s 00:32:45.278 sys 0m0.805s 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.278 16:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.278 ************************************ 00:32:45.278 END TEST spdkcli_nvmf_tcp 00:32:45.278 ************************************ 00:32:45.278 16:25:46 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:45.278 16:25:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:45.278 16:25:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:32:45.278 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:32:45.278 ************************************ 00:32:45.278 START TEST nvmf_identify_passthru 00:32:45.278 ************************************ 00:32:45.278 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:45.538 * Looking for test storage... 00:32:45.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:45.538 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.538 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.538 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.538 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.538 16:25:46 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.538 16:25:46 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.538 16:25:46 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.538 16:25:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:45.539 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.539 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.539 --rc genhtml_branch_coverage=1 00:32:45.539 --rc genhtml_function_coverage=1 00:32:45.539 --rc genhtml_legend=1 00:32:45.539 --rc geninfo_all_blocks=1 00:32:45.539 --rc geninfo_unexecuted_blocks=1 00:32:45.539 
00:32:45.539 ' 00:32:45.539 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.539 --rc genhtml_branch_coverage=1 00:32:45.539 --rc genhtml_function_coverage=1 00:32:45.539 --rc genhtml_legend=1 00:32:45.539 --rc geninfo_all_blocks=1 00:32:45.539 --rc geninfo_unexecuted_blocks=1 00:32:45.539 00:32:45.539 ' 00:32:45.539 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.539 --rc genhtml_branch_coverage=1 00:32:45.539 --rc genhtml_function_coverage=1 00:32:45.539 --rc genhtml_legend=1 00:32:45.539 --rc geninfo_all_blocks=1 00:32:45.539 --rc geninfo_unexecuted_blocks=1 00:32:45.539 00:32:45.539 ' 00:32:45.539 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.539 --rc genhtml_branch_coverage=1 00:32:45.539 --rc genhtml_function_coverage=1 00:32:45.539 --rc genhtml_legend=1 00:32:45.539 --rc geninfo_all_blocks=1 00:32:45.539 --rc geninfo_unexecuted_blocks=1 00:32:45.539 00:32:45.539 ' 00:32:45.539 16:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.539 16:25:46 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.539 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.539 16:25:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.540 16:25:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.540 16:25:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.541 16:25:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.541 16:25:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.541 16:25:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.541 16:25:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:45.542 16:25:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:45.542 16:25:46 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:45.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.542 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.545 16:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.546 16:25:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.546 16:25:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.546 16:25:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.546 16:25:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.547 16:25:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.548 16:25:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.549 16:25:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.549 16:25:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:45.549 16:25:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.549 16:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:45.550 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.550 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.550 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.550 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.551 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.551 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.551 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:45.551 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.551 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.551 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.551 16:25:46 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.551 16:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:52.125 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.126 
16:25:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:52.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:52.126 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:52.126 Found net devices under 0000:86:00.0: cvl_0_0 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.126 16:25:51 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:52.126 Found net devices under 0000:86:00.1: cvl_0_1 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.126 
16:25:51 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.126 16:25:51 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.126 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.126 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.126 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.126 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:52.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:32:52.127 00:32:52.127 --- 10.0.0.2 ping statistics --- 00:32:52.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.127 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:32:52.127 00:32:52.127 --- 10.0.0.1 ping statistics --- 00:32:52.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.127 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.127 16:25:52 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:52.127 
16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:52.127 16:25:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:52.127 16:25:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:56.320 16:25:56 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:56.320 16:25:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:56.320 16:25:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:56.320 16:25:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2984366 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2984366 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2984366 ']' 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.512 [2024-11-20 16:26:00.604507] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:33:00.512 [2024-11-20 16:26:00.604554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.512 [2024-11-20 16:26:00.681685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.512 [2024-11-20 16:26:00.725468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.512 [2024-11-20 16:26:00.725508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.512 [2024-11-20 16:26:00.725517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.512 [2024-11-20 16:26:00.725524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.512 [2024-11-20 16:26:00.725529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:00.512 [2024-11-20 16:26:00.728967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.512 [2024-11-20 16:26:00.729004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.512 [2024-11-20 16:26:00.729112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.512 [2024-11-20 16:26:00.729112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.512 INFO: Log level set to 20 00:33:00.512 INFO: Requests: 00:33:00.512 { 00:33:00.512 "jsonrpc": "2.0", 00:33:00.512 "method": "nvmf_set_config", 00:33:00.512 "id": 1, 00:33:00.512 "params": { 00:33:00.512 "admin_cmd_passthru": { 00:33:00.512 "identify_ctrlr": true 00:33:00.512 } 00:33:00.512 } 00:33:00.512 } 00:33:00.512 00:33:00.512 INFO: response: 00:33:00.512 { 00:33:00.512 "jsonrpc": "2.0", 00:33:00.512 "id": 1, 00:33:00.512 "result": true 00:33:00.512 } 00:33:00.512 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.512 INFO: Setting log level to 20 00:33:00.512 INFO: Setting log level to 20 00:33:00.512 INFO: Log level set to 20 00:33:00.512 INFO: Log level set to 20 00:33:00.512 
INFO: Requests: 00:33:00.512 { 00:33:00.512 "jsonrpc": "2.0", 00:33:00.512 "method": "framework_start_init", 00:33:00.512 "id": 1 00:33:00.512 } 00:33:00.512 00:33:00.512 INFO: Requests: 00:33:00.512 { 00:33:00.512 "jsonrpc": "2.0", 00:33:00.512 "method": "framework_start_init", 00:33:00.512 "id": 1 00:33:00.512 } 00:33:00.512 00:33:00.512 [2024-11-20 16:26:00.865668] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:00.512 INFO: response: 00:33:00.512 { 00:33:00.512 "jsonrpc": "2.0", 00:33:00.512 "id": 1, 00:33:00.512 "result": true 00:33:00.512 } 00:33:00.512 00:33:00.512 INFO: response: 00:33:00.512 { 00:33:00.512 "jsonrpc": "2.0", 00:33:00.512 "id": 1, 00:33:00.512 "result": true 00:33:00.512 } 00:33:00.512 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.512 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.512 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.512 INFO: Setting log level to 40 00:33:00.512 INFO: Setting log level to 40 00:33:00.512 INFO: Setting log level to 40 00:33:00.512 [2024-11-20 16:26:00.879009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.513 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.513 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:00.513 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.513 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:00.513 16:26:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:00.513 16:26:00 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.513 16:26:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.048 Nvme0n1 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.048 [2024-11-20 16:26:03.788668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.048 16:26:03 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.048 [ 00:33:03.048 { 00:33:03.048 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:03.048 "subtype": "Discovery", 00:33:03.048 "listen_addresses": [], 00:33:03.048 "allow_any_host": true, 00:33:03.048 "hosts": [] 00:33:03.048 }, 00:33:03.048 { 00:33:03.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:03.048 "subtype": "NVMe", 00:33:03.048 "listen_addresses": [ 00:33:03.048 { 00:33:03.048 "trtype": "TCP", 00:33:03.048 "adrfam": "IPv4", 00:33:03.048 "traddr": "10.0.0.2", 00:33:03.048 "trsvcid": "4420" 00:33:03.048 } 00:33:03.048 ], 00:33:03.048 "allow_any_host": true, 00:33:03.048 "hosts": [], 00:33:03.048 "serial_number": "SPDK00000000000001", 00:33:03.048 "model_number": "SPDK bdev Controller", 00:33:03.048 "max_namespaces": 1, 00:33:03.048 "min_cntlid": 1, 00:33:03.048 "max_cntlid": 65519, 00:33:03.048 "namespaces": [ 00:33:03.048 { 00:33:03.048 "nsid": 1, 00:33:03.048 "bdev_name": "Nvme0n1", 00:33:03.048 "name": "Nvme0n1", 00:33:03.048 "nguid": "762CB995B606403D8B6D520AC66FA34C", 00:33:03.048 "uuid": "762cb995-b606-403d-8b6d-520ac66fa34c" 00:33:03.048 } 00:33:03.048 ] 00:33:03.048 } 00:33:03.048 ] 00:33:03.048 16:26:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:03.048 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:03.307 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:03.307 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:03.307 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:03.307 16:26:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:03.566 16:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:03.566 16:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:03.566 16:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:03.566 16:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.566 16:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:03.566 16:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.566 rmmod nvme_tcp 00:33:03.566 rmmod nvme_fabrics 00:33:03.566 rmmod nvme_keyring 00:33:03.566 16:26:04 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2984366 ']' 00:33:03.566 16:26:04 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2984366 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2984366 ']' 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2984366 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.566 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2984366 00:33:03.825 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.825 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.825 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2984366' 00:33:03.825 killing process with pid 2984366 00:33:03.825 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2984366 00:33:03.825 16:26:04 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2984366 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:05.202 16:26:05 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.202 16:26:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.202 16:26:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.202 16:26:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.108 16:26:07 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.108 00:33:07.108 real 0m21.874s 00:33:07.108 user 0m27.048s 00:33:07.108 sys 0m6.239s 00:33:07.108 16:26:07 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.108 16:26:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.108 ************************************ 00:33:07.108 END TEST nvmf_identify_passthru 00:33:07.108 ************************************ 00:33:07.367 16:26:07 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:07.367 16:26:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:07.367 16:26:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.367 16:26:07 -- common/autotest_common.sh@10 -- # set +x 00:33:07.367 ************************************ 00:33:07.367 START TEST nvmf_dif 00:33:07.368 ************************************ 00:33:07.368 16:26:07 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:07.368 * Looking for test storage... 
00:33:07.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.368 --rc genhtml_branch_coverage=1 00:33:07.368 --rc genhtml_function_coverage=1 00:33:07.368 --rc genhtml_legend=1 00:33:07.368 --rc geninfo_all_blocks=1 00:33:07.368 --rc geninfo_unexecuted_blocks=1 00:33:07.368 00:33:07.368 ' 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.368 --rc genhtml_branch_coverage=1 00:33:07.368 --rc genhtml_function_coverage=1 00:33:07.368 --rc genhtml_legend=1 00:33:07.368 --rc geninfo_all_blocks=1 00:33:07.368 --rc geninfo_unexecuted_blocks=1 00:33:07.368 00:33:07.368 ' 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.368 --rc genhtml_branch_coverage=1 00:33:07.368 --rc genhtml_function_coverage=1 00:33:07.368 --rc genhtml_legend=1 00:33:07.368 --rc geninfo_all_blocks=1 00:33:07.368 --rc geninfo_unexecuted_blocks=1 00:33:07.368 00:33:07.368 ' 00:33:07.368 16:26:08 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:07.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.368 --rc genhtml_branch_coverage=1 00:33:07.368 --rc genhtml_function_coverage=1 00:33:07.368 --rc genhtml_legend=1 00:33:07.368 --rc geninfo_all_blocks=1 00:33:07.368 --rc geninfo_unexecuted_blocks=1 00:33:07.368 00:33:07.368 ' 00:33:07.368 16:26:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:07.368 16:26:08 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.368 16:26:08 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.368 16:26:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.368 16:26:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.368 16:26:08 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.368 16:26:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:07.368 16:26:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:07.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.368 16:26:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:07.368 16:26:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:07.368 16:26:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:07.368 16:26:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:07.368 16:26:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.368 16:26:08 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.369 16:26:08 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.369 16:26:08 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.369 16:26:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.369 16:26:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:07.369 16:26:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.369 16:26:08 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.369 16:26:08 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.369 16:26:08 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.369 16:26:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:13.940 16:26:13 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:13.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:13.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.940 16:26:13 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:13.940 Found net devices under 0000:86:00.0: cvl_0_0 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:13.940 Found net devices under 0000:86:00.1: cvl_0_1 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.940 
16:26:13 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.940 16:26:13 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.941 16:26:13 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.941 16:26:13 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.941 16:26:13 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:13.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:33:13.941 00:33:13.941 --- 10.0.0.2 ping statistics --- 00:33:13.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.941 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:13.941 00:33:13.941 --- 10.0.0.1 ping statistics --- 00:33:13.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.941 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:13.941 16:26:14 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:16.478 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:16.478 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:16.478 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:16.478 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.478 16:26:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:16.478 16:26:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2989839 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:16.478 16:26:16 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2989839 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2989839 ']' 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:16.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.478 16:26:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:16.478 [2024-11-20 16:26:17.034726] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:33:16.479 [2024-11-20 16:26:17.034768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.479 [2024-11-20 16:26:17.114187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.479 [2024-11-20 16:26:17.154983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.479 [2024-11-20 16:26:17.155022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.479 [2024-11-20 16:26:17.155029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.479 [2024-11-20 16:26:17.155035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.479 [2024-11-20 16:26:17.155041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:16.479 [2024-11-20 16:26:17.155615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:16.479 16:26:17 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:16.479 16:26:17 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.479 16:26:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:16.479 16:26:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:16.479 [2024-11-20 16:26:17.287284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.479 16:26:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.479 16:26:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:16.738 ************************************ 00:33:16.738 START TEST fio_dif_1_default 00:33:16.738 ************************************ 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:16.738 bdev_null0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:16.738 [2024-11-20 16:26:17.359634] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.738 { 00:33:16.738 "params": { 00:33:16.738 "name": "Nvme$subsystem", 00:33:16.738 "trtype": "$TEST_TRANSPORT", 00:33:16.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.738 "adrfam": "ipv4", 00:33:16.738 "trsvcid": "$NVMF_PORT", 00:33:16.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.738 "hdgst": ${hdgst:-false}, 00:33:16.738 "ddgst": ${ddgst:-false} 00:33:16.738 }, 00:33:16.738 "method": "bdev_nvme_attach_controller" 00:33:16.738 } 00:33:16.738 EOF 00:33:16.738 )") 00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:16.738 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.739 "params": { 00:33:16.739 "name": "Nvme0", 00:33:16.739 "trtype": "tcp", 00:33:16.739 "traddr": "10.0.0.2", 00:33:16.739 "adrfam": "ipv4", 00:33:16.739 "trsvcid": "4420", 00:33:16.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:16.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:16.739 "hdgst": false, 00:33:16.739 "ddgst": false 00:33:16.739 }, 00:33:16.739 "method": "bdev_nvme_attach_controller" 00:33:16.739 }' 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:16.739 16:26:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:16.998 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:16.998 fio-3.35 
00:33:16.998 Starting 1 thread 00:33:29.208 00:33:29.208 filename0: (groupid=0, jobs=1): err= 0: pid=2990205: Wed Nov 20 16:26:28 2024 00:33:29.208 read: IOPS=188, BW=756KiB/s (774kB/s)(7568KiB/10015msec) 00:33:29.208 slat (nsec): min=6123, max=28535, avg=6393.55, stdev=693.81 00:33:29.208 clat (usec): min=385, max=45027, avg=21153.79, stdev=20696.67 00:33:29.208 lat (usec): min=391, max=45055, avg=21160.18, stdev=20696.62 00:33:29.208 clat percentiles (usec): 00:33:29.208 | 1.00th=[ 396], 5.00th=[ 424], 10.00th=[ 461], 20.00th=[ 478], 00:33:29.208 | 30.00th=[ 494], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[41157], 00:33:29.208 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:33:29.208 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:33:29.208 | 99.99th=[44827] 00:33:29.208 bw ( KiB/s): min= 704, max= 832, per=99.91%, avg=755.20, stdev=33.48, samples=20 00:33:29.208 iops : min= 176, max= 208, avg=188.80, stdev= 8.37, samples=20 00:33:29.208 lat (usec) : 500=32.72%, 750=17.39% 00:33:29.208 lat (msec) : 50=49.89% 00:33:29.208 cpu : usr=92.38%, sys=7.35%, ctx=17, majf=0, minf=0 00:33:29.208 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.208 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.208 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:29.208 00:33:29.208 Run status group 0 (all jobs): 00:33:29.208 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7568KiB (7750kB), run=10015-10015msec 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 00:33:29.208 real 0m11.250s 00:33:29.208 user 0m16.307s 00:33:29.208 sys 0m1.097s 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 ************************************ 00:33:29.208 END TEST fio_dif_1_default 00:33:29.208 ************************************ 00:33:29.208 16:26:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:29.208 16:26:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:29.208 16:26:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 ************************************ 00:33:29.208 START TEST fio_dif_1_multi_subsystems 00:33:29.208 ************************************ 00:33:29.208 16:26:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 bdev_null0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 [2024-11-20 16:26:28.687582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.208 bdev_null1 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:29.208 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:29.209 { 00:33:29.209 "params": { 00:33:29.209 "name": "Nvme$subsystem", 00:33:29.209 "trtype": "$TEST_TRANSPORT", 00:33:29.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.209 "adrfam": "ipv4", 00:33:29.209 "trsvcid": "$NVMF_PORT", 00:33:29.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.209 "hdgst": ${hdgst:-false}, 00:33:29.209 "ddgst": ${ddgst:-false} 00:33:29.209 }, 00:33:29.209 "method": "bdev_nvme_attach_controller" 00:33:29.209 } 00:33:29.209 EOF 00:33:29.209 )") 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:29.209 { 00:33:29.209 "params": { 00:33:29.209 "name": "Nvme$subsystem", 00:33:29.209 "trtype": "$TEST_TRANSPORT", 00:33:29.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.209 "adrfam": "ipv4", 00:33:29.209 "trsvcid": "$NVMF_PORT", 00:33:29.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.209 "hdgst": ${hdgst:-false}, 00:33:29.209 "ddgst": ${ddgst:-false} 00:33:29.209 }, 00:33:29.209 "method": "bdev_nvme_attach_controller" 00:33:29.209 } 00:33:29.209 EOF 00:33:29.209 )") 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:29.209 "params": { 00:33:29.209 "name": "Nvme0", 00:33:29.209 "trtype": "tcp", 00:33:29.209 "traddr": "10.0.0.2", 00:33:29.209 "adrfam": "ipv4", 00:33:29.209 "trsvcid": "4420", 00:33:29.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:29.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:29.209 "hdgst": false, 00:33:29.209 "ddgst": false 00:33:29.209 }, 00:33:29.209 "method": "bdev_nvme_attach_controller" 00:33:29.209 },{ 00:33:29.209 "params": { 00:33:29.209 "name": "Nvme1", 00:33:29.209 "trtype": "tcp", 00:33:29.209 "traddr": "10.0.0.2", 00:33:29.209 "adrfam": "ipv4", 00:33:29.209 "trsvcid": "4420", 00:33:29.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:29.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:29.209 "hdgst": false, 00:33:29.209 "ddgst": false 00:33:29.209 }, 00:33:29.209 "method": "bdev_nvme_attach_controller" 00:33:29.209 }' 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:29.209 16:26:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.209 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:29.209 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:29.209 fio-3.35 00:33:29.209 Starting 2 threads 00:33:39.189 00:33:39.189 filename0: (groupid=0, jobs=1): err= 0: pid=2992177: Wed Nov 20 16:26:39 2024 00:33:39.189 read: IOPS=95, BW=383KiB/s (393kB/s)(3840KiB/10018msec) 00:33:39.189 slat (nsec): min=6213, max=41926, avg=8440.42, stdev=4443.63 00:33:39.189 clat (usec): min=40863, max=42076, avg=41711.91, stdev=440.76 00:33:39.189 lat (usec): min=40870, max=42089, avg=41720.35, stdev=440.16 00:33:39.189 clat percentiles (usec): 00:33:39.189 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:39.189 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:39.189 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:39.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:39.189 | 99.99th=[42206] 00:33:39.189 bw ( KiB/s): min= 352, max= 384, per=49.75%, avg=382.40, stdev= 7.16, samples=20 00:33:39.189 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:33:39.189 lat (msec) : 50=100.00% 00:33:39.189 cpu : usr=97.27%, sys=2.45%, ctx=14, majf=0, minf=202 00:33:39.189 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.189 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.189 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.189 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:39.189 filename1: (groupid=0, jobs=1): err= 0: pid=2992178: Wed Nov 20 16:26:39 2024 00:33:39.189 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10023msec) 00:33:39.189 slat (nsec): min=6255, max=53410, avg=8682.46, stdev=4245.56 00:33:39.189 clat (usec): min=528, max=42166, avg=41558.92, stdev=2682.99 00:33:39.189 lat (usec): min=542, max=42179, avg=41567.60, stdev=2681.93 00:33:39.189 clat percentiles (usec): 00:33:39.189 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:39.189 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:39.189 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:39.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:39.189 | 99.99th=[42206] 00:33:39.189 bw ( KiB/s): min= 352, max= 448, per=50.01%, avg=384.00, stdev=17.98, samples=20 00:33:39.189 iops : min= 88, max= 112, avg=96.00, stdev= 4.50, samples=20 00:33:39.189 lat (usec) : 750=0.41% 00:33:39.189 lat (msec) : 50=99.59% 00:33:39.189 cpu : usr=97.02%, sys=2.69%, ctx=22, majf=0, minf=122 00:33:39.189 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.189 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.189 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:39.189 00:33:39.189 Run status group 0 (all jobs): 00:33:39.189 READ: bw=768KiB/s (786kB/s), 383KiB/s-385KiB/s (393kB/s-394kB/s), io=7696KiB (7881kB), run=10018-10023msec 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:39.449 16:26:40 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:39.449 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.450 00:33:39.450 real 0m11.499s 00:33:39.450 user 0m26.201s 00:33:39.450 sys 0m0.851s 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 ************************************ 00:33:39.450 END TEST fio_dif_1_multi_subsystems 00:33:39.450 ************************************ 00:33:39.450 16:26:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:39.450 16:26:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:39.450 16:26:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 ************************************ 00:33:39.450 START TEST fio_dif_rand_params 00:33:39.450 ************************************ 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 bdev_null0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 16:26:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.450 [2024-11-20 16:26:40.260025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.450 { 00:33:39.450 "params": { 
00:33:39.450 "name": "Nvme$subsystem", 00:33:39.450 "trtype": "$TEST_TRANSPORT", 00:33:39.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.450 "adrfam": "ipv4", 00:33:39.450 "trsvcid": "$NVMF_PORT", 00:33:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.450 "hdgst": ${hdgst:-false}, 00:33:39.450 "ddgst": ${ddgst:-false} 00:33:39.450 }, 00:33:39.450 "method": "bdev_nvme_attach_controller" 00:33:39.450 } 00:33:39.450 EOF 00:33:39.450 )") 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:39.450 16:26:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:39.450 16:26:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:39.450 "params": { 00:33:39.450 "name": "Nvme0", 00:33:39.450 "trtype": "tcp", 00:33:39.450 "traddr": "10.0.0.2", 00:33:39.450 "adrfam": "ipv4", 00:33:39.450 "trsvcid": "4420", 00:33:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.450 "hdgst": false, 00:33:39.450 "ddgst": false 00:33:39.450 }, 00:33:39.450 "method": "bdev_nvme_attach_controller" 00:33:39.450 }' 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:39.710 16:26:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.981 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:39.981 ... 00:33:39.981 fio-3.35 00:33:39.981 Starting 3 threads 00:33:46.550 00:33:46.550 filename0: (groupid=0, jobs=1): err= 0: pid=2994131: Wed Nov 20 16:26:46 2024 00:33:46.550 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(186MiB/5045msec) 00:33:46.550 slat (nsec): min=6617, max=48700, avg=23056.73, stdev=7176.77 00:33:46.550 clat (usec): min=3373, max=52928, avg=10102.85, stdev=6831.98 00:33:46.550 lat (usec): min=3381, max=52959, avg=10125.90, stdev=6831.34 00:33:46.550 clat percentiles (usec): 00:33:46.550 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 7242], 20.00th=[ 8094], 00:33:46.550 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:33:46.550 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:33:46.550 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[52691], 00:33:46.550 | 99.99th=[52691] 00:33:46.550 bw ( KiB/s): min=29184, max=45056, per=31.25%, avg=38067.20, stdev=5542.14, samples=10 00:33:46.550 iops : min= 228, max= 352, avg=297.40, stdev=43.30, samples=10 00:33:46.550 lat (msec) : 4=0.60%, 10=76.17%, 20=20.27%, 50=2.75%, 100=0.20% 00:33:46.550 cpu : usr=97.13%, sys=2.56%, ctx=7, majf=0, minf=18 00:33:46.550 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.550 issued rwts: total=1490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:46.550 filename0: (groupid=0, jobs=1): err= 0: pid=2994132: Wed Nov 20 16:26:46 2024 00:33:46.550 read: IOPS=321, BW=40.1MiB/s (42.1MB/s)(201MiB/5003msec) 00:33:46.550 slat (nsec): min=6155, max=89148, avg=15880.11, stdev=7299.06 
00:33:46.550 clat (usec): min=3217, max=49562, avg=9325.87, stdev=3489.01 00:33:46.550 lat (usec): min=3225, max=49586, avg=9341.75, stdev=3489.85 00:33:46.550 clat percentiles (usec): 00:33:46.550 | 1.00th=[ 3654], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 7439], 00:33:46.550 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:33:46.550 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:33:46.550 | 99.00th=[12780], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:33:46.550 | 99.99th=[49546] 00:33:46.550 bw ( KiB/s): min=36352, max=49920, per=33.71%, avg=41062.40, stdev=3692.48, samples=10 00:33:46.550 iops : min= 284, max= 390, avg=320.80, stdev=28.85, samples=10 00:33:46.550 lat (msec) : 4=2.68%, 10=61.15%, 20=35.62%, 50=0.56% 00:33:46.550 cpu : usr=95.62%, sys=4.04%, ctx=16, majf=0, minf=95 00:33:46.550 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.550 issued rwts: total=1606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:46.550 filename0: (groupid=0, jobs=1): err= 0: pid=2994133: Wed Nov 20 16:26:46 2024 00:33:46.550 read: IOPS=338, BW=42.3MiB/s (44.3MB/s)(213MiB/5047msec) 00:33:46.550 slat (nsec): min=6099, max=45666, avg=15585.69, stdev=6719.91 00:33:46.550 clat (usec): min=3452, max=51235, avg=8826.76, stdev=5237.15 00:33:46.550 lat (usec): min=3459, max=51260, avg=8842.35, stdev=5237.08 00:33:46.550 clat percentiles (usec): 00:33:46.550 | 1.00th=[ 4293], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7439], 00:33:46.550 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:33:46.550 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[ 9896], 00:33:46.550 | 99.00th=[45876], 99.50th=[49021], 99.90th=[51119], 
99.95th=[51119], 00:33:46.550 | 99.99th=[51119] 00:33:46.550 bw ( KiB/s): min=39168, max=47104, per=35.81%, avg=43622.40, stdev=2757.73, samples=10 00:33:46.550 iops : min= 306, max= 368, avg=340.80, stdev=21.54, samples=10 00:33:46.550 lat (msec) : 4=0.70%, 10=94.73%, 20=2.87%, 50=1.58%, 100=0.12% 00:33:46.550 cpu : usr=96.00%, sys=3.67%, ctx=13, majf=0, minf=60 00:33:46.550 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.550 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:46.550 00:33:46.550 Run status group 0 (all jobs): 00:33:46.550 READ: bw=119MiB/s (125MB/s), 36.9MiB/s-42.3MiB/s (38.7MB/s-44.3MB/s), io=600MiB (630MB), run=5003-5047msec 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:46.550 16:26:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.550 bdev_null0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.550 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 [2024-11-20 16:26:46.522969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 bdev_null1 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:46.551 bdev_null2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.551 { 00:33:46.551 "params": { 00:33:46.551 "name": "Nvme$subsystem", 00:33:46.551 "trtype": "$TEST_TRANSPORT", 00:33:46.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.551 "adrfam": "ipv4", 00:33:46.551 "trsvcid": "$NVMF_PORT", 00:33:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.551 "hdgst": ${hdgst:-false}, 00:33:46.551 "ddgst": ${ddgst:-false} 00:33:46.551 }, 00:33:46.551 "method": "bdev_nvme_attach_controller" 00:33:46.551 } 00:33:46.551 EOF 00:33:46.551 )") 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.551 16:26:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.551 { 00:33:46.551 "params": { 00:33:46.551 "name": "Nvme$subsystem", 00:33:46.551 "trtype": "$TEST_TRANSPORT", 00:33:46.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.551 "adrfam": "ipv4", 00:33:46.551 "trsvcid": "$NVMF_PORT", 00:33:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.551 "hdgst": ${hdgst:-false}, 00:33:46.551 "ddgst": ${ddgst:-false} 00:33:46.551 }, 00:33:46.551 "method": 
"bdev_nvme_attach_controller" 00:33:46.551 } 00:33:46.551 EOF 00:33:46.551 )") 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.551 { 00:33:46.551 "params": { 00:33:46.551 "name": "Nvme$subsystem", 00:33:46.551 "trtype": "$TEST_TRANSPORT", 00:33:46.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.551 "adrfam": "ipv4", 00:33:46.551 "trsvcid": "$NVMF_PORT", 00:33:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.551 "hdgst": ${hdgst:-false}, 00:33:46.551 "ddgst": ${ddgst:-false} 00:33:46.551 }, 00:33:46.551 "method": "bdev_nvme_attach_controller" 00:33:46.551 } 00:33:46.551 EOF 00:33:46.551 )") 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:46.551 16:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:46.551 "params": { 00:33:46.551 "name": "Nvme0", 00:33:46.551 "trtype": "tcp", 00:33:46.551 "traddr": "10.0.0.2", 00:33:46.551 "adrfam": "ipv4", 00:33:46.551 "trsvcid": "4420", 00:33:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.551 "hdgst": false, 00:33:46.551 "ddgst": false 00:33:46.551 }, 00:33:46.551 "method": "bdev_nvme_attach_controller" 00:33:46.551 },{ 00:33:46.551 "params": { 00:33:46.551 "name": "Nvme1", 00:33:46.551 "trtype": "tcp", 00:33:46.551 "traddr": "10.0.0.2", 00:33:46.551 "adrfam": "ipv4", 00:33:46.551 "trsvcid": "4420", 00:33:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.551 "hdgst": false, 00:33:46.552 "ddgst": false 00:33:46.552 }, 00:33:46.552 "method": "bdev_nvme_attach_controller" 00:33:46.552 },{ 00:33:46.552 "params": { 00:33:46.552 "name": "Nvme2", 00:33:46.552 "trtype": "tcp", 00:33:46.552 "traddr": "10.0.0.2", 00:33:46.552 "adrfam": "ipv4", 00:33:46.552 "trsvcid": "4420", 00:33:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:46.552 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:46.552 "hdgst": false, 00:33:46.552 "ddgst": false 00:33:46.552 }, 00:33:46.552 "method": "bdev_nvme_attach_controller" 00:33:46.552 }' 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.552 16:26:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:46.552 16:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.552 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:46.552 ... 00:33:46.552 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:46.552 ... 00:33:46.552 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:46.552 ... 
00:33:46.552 fio-3.35 00:33:46.552 Starting 24 threads 00:33:58.900 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995185: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.3MiB/10021msec) 00:33:58.900 slat (nsec): min=6883, max=59457, avg=16049.66, stdev=8763.95 00:33:58.900 clat (usec): min=993, max=34430, avg=26802.99, stdev=4670.94 00:33:58.900 lat (usec): min=1003, max=34438, avg=26819.04, stdev=4672.18 00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[ 1532], 5.00th=[17957], 10.00th=[27657], 20.00th=[27919], 00:33:58.900 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:58.900 | 99.00th=[28705], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:33:58.900 | 99.99th=[34341] 00:33:58.900 bw ( KiB/s): min= 2192, max= 4112, per=4.33%, avg=2375.20, stdev=410.77, samples=20 00:33:58.900 iops : min= 548, max= 1028, avg=593.80, stdev=102.69, samples=20 00:33:58.900 lat (usec) : 1000=0.02% 00:33:58.900 lat (msec) : 2=1.60%, 4=0.77%, 10=0.34%, 20=3.12%, 50=94.16% 00:33:58.900 cpu : usr=98.35%, sys=1.30%, ctx=16, majf=0, minf=9 00:33:58.900 IO depths : 1=0.8%, 2=6.6%, 4=23.6%, 8=57.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 issued rwts: total=5954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995186: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10002msec) 00:33:58.900 slat (nsec): min=4384, max=57925, avg=25015.34, stdev=9159.95 00:33:58.900 clat (usec): min=20040, max=33483, avg=27897.06, stdev=521.52 00:33:58.900 lat (usec): min=20075, max=33510, avg=27922.07, stdev=519.84 
00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.900 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.900 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31851], 99.95th=[31851], 00:33:58.900 | 99.99th=[33424] 00:33:58.900 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:58.900 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:58.900 lat (msec) : 50=100.00% 00:33:58.900 cpu : usr=98.48%, sys=1.18%, ctx=13, majf=0, minf=9 00:33:58.900 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995187: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:33:58.900 slat (nsec): min=6894, max=64163, avg=17154.24, stdev=6490.09 00:33:58.900 clat (usec): min=12581, max=54195, avg=27936.54, stdev=1694.48 00:33:58.900 lat (usec): min=12588, max=54228, avg=27953.69, stdev=1695.64 00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.900 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.900 | 99.00th=[28967], 99.50th=[29492], 99.90th=[54264], 99.95th=[54264], 00:33:58.900 | 99.99th=[54264] 00:33:58.900 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2270.32, stdev=71.93, samples=19 00:33:58.900 iops : min= 512, max= 
576, avg=567.58, stdev=17.98, samples=19 00:33:58.900 lat (msec) : 20=0.56%, 50=99.16%, 100=0.28% 00:33:58.900 cpu : usr=98.27%, sys=1.39%, ctx=15, majf=0, minf=9 00:33:58.900 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995188: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10001msec) 00:33:58.900 slat (nsec): min=7539, max=58961, avg=19871.12, stdev=8147.78 00:33:58.900 clat (usec): min=19882, max=31582, avg=27933.90, stdev=505.83 00:33:58.900 lat (usec): min=19923, max=31602, avg=27953.77, stdev=504.61 00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.900 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:58.900 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31589], 99.95th=[31589], 00:33:58.900 | 99.99th=[31589] 00:33:58.900 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:58.900 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:58.900 lat (msec) : 20=0.09%, 50=99.91% 00:33:58.900 cpu : usr=98.53%, sys=1.14%, ctx=9, majf=0, minf=9 00:33:58.900 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.900 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995189: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:58.900 slat (nsec): min=5285, max=59511, avg=26535.73, stdev=8804.59 00:33:58.900 clat (usec): min=13043, max=49127, avg=27847.64, stdev=1442.86 00:33:58.900 lat (usec): min=13059, max=49142, avg=27874.18, stdev=1442.47 00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.900 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:58.900 | 99.00th=[28705], 99.50th=[29230], 99.90th=[49021], 99.95th=[49021], 00:33:58.900 | 99.99th=[49021] 00:33:58.900 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2270.53, stdev=71.25, samples=19 00:33:58.900 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:33:58.900 lat (msec) : 20=0.32%, 50=99.68% 00:33:58.900 cpu : usr=98.36%, sys=1.30%, ctx=14, majf=0, minf=9 00:33:58.900 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995190: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=574, BW=2300KiB/s (2355kB/s)(22.5MiB/10001msec) 00:33:58.900 slat (nsec): min=6850, max=79581, avg=20639.13, stdev=11848.82 00:33:58.900 clat (usec): min=8912, max=31565, avg=27636.90, stdev=1936.93 00:33:58.900 lat (usec): min=8930, max=31572, avg=27657.54, stdev=1937.81 00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[15008], 5.00th=[27395], 
10.00th=[27657], 20.00th=[27657], 00:33:58.900 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.900 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[31589], 00:33:58.900 | 99.99th=[31589] 00:33:58.900 bw ( KiB/s): min= 2176, max= 2608, per=4.19%, avg=2299.79, stdev=88.58, samples=19 00:33:58.900 iops : min= 544, max= 652, avg=574.95, stdev=22.14, samples=19 00:33:58.900 lat (msec) : 10=0.24%, 20=1.29%, 50=98.47% 00:33:58.900 cpu : usr=98.38%, sys=1.27%, ctx=10, majf=0, minf=9 00:33:58.900 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.900 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.900 filename0: (groupid=0, jobs=1): err= 0: pid=2995191: Wed Nov 20 16:26:57 2024 00:33:58.900 read: IOPS=569, BW=2279KiB/s (2333kB/s)(22.3MiB/10009msec) 00:33:58.900 slat (nsec): min=5063, max=55068, avg=25826.83, stdev=8344.57 00:33:58.900 clat (usec): min=13018, max=46562, avg=27849.70, stdev=1499.70 00:33:58.900 lat (usec): min=13031, max=46580, avg=27875.52, stdev=1498.96 00:33:58.900 clat percentiles (usec): 00:33:58.900 | 1.00th=[22414], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.900 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:58.900 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.900 | 99.00th=[29230], 99.50th=[34866], 99.90th=[46400], 99.95th=[46400], 00:33:58.901 | 99.99th=[46400] 00:33:58.901 bw ( KiB/s): min= 2096, max= 2304, per=4.15%, avg=2272.84, stdev=64.11, samples=19 00:33:58.901 iops : min= 524, max= 576, avg=568.21, stdev=16.03, samples=19 00:33:58.901 lat (msec) : 
20=0.40%, 50=99.60% 00:33:58.901 cpu : usr=98.57%, sys=1.08%, ctx=13, majf=0, minf=9 00:33:58.901 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename0: (groupid=0, jobs=1): err= 0: pid=2995192: Wed Nov 20 16:26:57 2024 00:33:58.901 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10011msec) 00:33:58.901 slat (nsec): min=6203, max=65063, avg=16151.89, stdev=7975.04 00:33:58.901 clat (usec): min=17215, max=29399, avg=27911.94, stdev=752.13 00:33:58.901 lat (usec): min=17222, max=29413, avg=27928.09, stdev=751.86 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.901 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:58.901 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:33:58.901 | 99.99th=[29492] 00:33:58.901 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:58.901 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:58.901 lat (msec) : 20=0.28%, 50=99.72% 00:33:58.901 cpu : usr=98.30%, sys=1.35%, ctx=13, majf=0, minf=9 00:33:58.901 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename1: (groupid=0, 
jobs=1): err= 0: pid=2995193: Wed Nov 20 16:26:57 2024 00:33:58.901 read: IOPS=572, BW=2292KiB/s (2347kB/s)(22.4MiB/10007msec) 00:33:58.901 slat (nsec): min=6914, max=59732, avg=18235.97, stdev=8556.81 00:33:58.901 clat (usec): min=8407, max=29338, avg=27775.97, stdev=1624.72 00:33:58.901 lat (usec): min=8419, max=29352, avg=27794.21, stdev=1624.77 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[15664], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.901 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:58.901 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:33:58.901 | 99.99th=[29230] 00:33:58.901 bw ( KiB/s): min= 2176, max= 2480, per=4.17%, avg=2287.20, stdev=69.16, samples=20 00:33:58.901 iops : min= 544, max= 620, avg=571.80, stdev=17.29, samples=20 00:33:58.901 lat (msec) : 10=0.23%, 20=0.98%, 50=98.80% 00:33:58.901 cpu : usr=98.32%, sys=1.33%, ctx=11, majf=0, minf=9 00:33:58.901 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename1: (groupid=0, jobs=1): err= 0: pid=2995194: Wed Nov 20 16:26:57 2024 00:33:58.901 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10001msec) 00:33:58.901 slat (nsec): min=7722, max=80311, avg=21039.21, stdev=11602.49 00:33:58.901 clat (usec): min=10284, max=29412, avg=27736.07, stdev=1522.20 00:33:58.901 lat (usec): min=10297, max=29424, avg=27757.11, stdev=1522.95 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[17433], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.901 | 30.00th=[27919], 
40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.901 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29492], 00:33:58.901 | 99.99th=[29492] 00:33:58.901 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2290.53, stdev=58.73, samples=19 00:33:58.901 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:58.901 lat (msec) : 20=1.12%, 50=98.88% 00:33:58.901 cpu : usr=98.46%, sys=1.20%, ctx=5, majf=0, minf=9 00:33:58.901 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename1: (groupid=0, jobs=1): err= 0: pid=2995195: Wed Nov 20 16:26:57 2024 00:33:58.901 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10011msec) 00:33:58.901 slat (nsec): min=6183, max=36473, avg=17662.19, stdev=4849.95 00:33:58.901 clat (usec): min=18878, max=29776, avg=27884.15, stdev=745.56 00:33:58.901 lat (usec): min=18885, max=29789, avg=27901.81, stdev=745.94 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[25822], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.901 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.901 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29754], 99.95th=[29754], 00:33:58.901 | 99.99th=[29754] 00:33:58.901 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:58.901 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:58.901 lat (msec) : 20=0.56%, 50=99.44% 00:33:58.901 cpu : usr=98.63%, sys=1.04%, ctx=12, majf=0, 
minf=9 00:33:58.901 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename1: (groupid=0, jobs=1): err= 0: pid=2995196: Wed Nov 20 16:26:57 2024 00:33:58.901 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:58.901 slat (nsec): min=5606, max=61398, avg=26861.79, stdev=8507.45 00:33:58.901 clat (usec): min=13047, max=48931, avg=27847.71, stdev=1435.11 00:33:58.901 lat (usec): min=13060, max=48948, avg=27874.57, stdev=1434.61 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.901 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.901 | 99.00th=[28705], 99.50th=[29230], 99.90th=[49021], 99.95th=[49021], 00:33:58.901 | 99.99th=[49021] 00:33:58.901 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2270.53, stdev=71.25, samples=19 00:33:58.901 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:33:58.901 lat (msec) : 20=0.35%, 50=99.65% 00:33:58.901 cpu : usr=98.46%, sys=1.19%, ctx=12, majf=0, minf=9 00:33:58.901 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename1: (groupid=0, jobs=1): err= 0: pid=2995197: Wed Nov 20 16:26:57 2024 00:33:58.901 read: 
IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10013msec) 00:33:58.901 slat (nsec): min=4117, max=57466, avg=25402.05, stdev=7140.53 00:33:58.901 clat (usec): min=12942, max=55618, avg=27799.12, stdev=1548.29 00:33:58.901 lat (usec): min=12966, max=55630, avg=27824.52, stdev=1548.51 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[20317], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.901 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.901 | 99.00th=[28967], 99.50th=[29492], 99.90th=[47973], 99.95th=[48497], 00:33:58.901 | 99.99th=[55837] 00:33:58.901 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2279.58, stdev=49.57, samples=19 00:33:58.901 iops : min= 544, max= 576, avg=569.89, stdev=12.39, samples=19 00:33:58.901 lat (msec) : 20=0.77%, 50=99.20%, 100=0.03% 00:33:58.901 cpu : usr=98.42%, sys=1.23%, ctx=9, majf=0, minf=9 00:33:58.901 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:58.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.901 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.901 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.901 filename1: (groupid=0, jobs=1): err= 0: pid=2995198: Wed Nov 20 16:26:57 2024 00:33:58.901 read: IOPS=571, BW=2285KiB/s (2339kB/s)(22.3MiB/10001msec) 00:33:58.901 slat (nsec): min=7121, max=37308, avg=17013.65, stdev=5280.41 00:33:58.901 clat (usec): min=12617, max=29875, avg=27867.13, stdev=1025.56 00:33:58.901 lat (usec): min=12633, max=29902, avg=27884.15, stdev=1025.79 00:33:58.901 clat percentiles (usec): 00:33:58.901 | 1.00th=[23200], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.901 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.901 | 70.00th=[27919], 
80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.901 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:33:58.901 | 99.99th=[29754] 00:33:58.901 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2283.79, stdev=47.95, samples=19 00:33:58.901 iops : min= 544, max= 576, avg=570.95, stdev=11.99, samples=19 00:33:58.902 lat (msec) : 20=0.56%, 50=99.44% 00:33:58.902 cpu : usr=98.35%, sys=1.31%, ctx=12, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename1: (groupid=0, jobs=1): err= 0: pid=2995199: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:58.902 slat (nsec): min=5624, max=60083, avg=27272.96, stdev=8653.44 00:33:58.902 clat (usec): min=12978, max=49368, avg=27845.95, stdev=1455.27 00:33:58.902 lat (usec): min=12992, max=49385, avg=27873.22, stdev=1454.75 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.902 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.902 | 99.00th=[28705], 99.50th=[29230], 99.90th=[49546], 99.95th=[49546], 00:33:58.902 | 99.99th=[49546] 00:33:58.902 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2270.32, stdev=71.93, samples=19 00:33:58.902 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:33:58.902 lat (msec) : 20=0.33%, 50=99.67% 00:33:58.902 cpu : usr=98.57%, sys=1.06%, ctx=15, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename1: (groupid=0, jobs=1): err= 0: pid=2995200: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10001msec) 00:33:58.902 slat (nsec): min=7242, max=64978, avg=27060.93, stdev=8897.71 00:33:58.902 clat (usec): min=19973, max=34716, avg=27866.15, stdev=529.55 00:33:58.902 lat (usec): min=19998, max=34748, avg=27893.21, stdev=528.70 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.902 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.902 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31589], 99.95th=[31589], 00:33:58.902 | 99.99th=[34866] 00:33:58.902 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:33:58.902 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:58.902 lat (msec) : 20=0.04%, 50=99.96% 00:33:58.902 cpu : usr=98.68%, sys=0.98%, ctx=7, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename2: (groupid=0, jobs=1): err= 0: pid=2995201: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10009msec) 00:33:58.902 slat (nsec): 
min=7253, max=37500, avg=17210.25, stdev=4733.86 00:33:58.902 clat (usec): min=9069, max=29882, avg=27727.40, stdev=1896.91 00:33:58.902 lat (usec): min=9078, max=29894, avg=27744.61, stdev=1896.99 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[12518], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:58.902 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.902 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:33:58.902 | 99.99th=[29754] 00:33:58.902 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2291.20, stdev=82.01, samples=20 00:33:58.902 iops : min= 544, max= 640, avg=572.80, stdev=20.50, samples=20 00:33:58.902 lat (msec) : 10=0.24%, 20=1.01%, 50=98.75% 00:33:58.902 cpu : usr=98.40%, sys=1.25%, ctx=18, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename2: (groupid=0, jobs=1): err= 0: pid=2995202: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10001msec) 00:33:58.902 slat (nsec): min=6910, max=79636, avg=20144.72, stdev=12222.27 00:33:58.902 clat (usec): min=9351, max=29501, avg=27735.42, stdev=1652.61 00:33:58.902 lat (usec): min=9362, max=29518, avg=27755.57, stdev=1653.10 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[16712], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.902 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.902 | 99.00th=[28967], 
99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:33:58.902 | 99.99th=[29492] 00:33:58.902 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2290.53, stdev=58.73, samples=19 00:33:58.902 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:58.902 lat (msec) : 10=0.40%, 20=0.72%, 50=98.88% 00:33:58.902 cpu : usr=98.48%, sys=1.17%, ctx=13, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename2: (groupid=0, jobs=1): err= 0: pid=2995203: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10001msec) 00:33:58.902 slat (nsec): min=7041, max=80134, avg=20081.48, stdev=11322.57 00:33:58.902 clat (usec): min=9195, max=29753, avg=27758.61, stdev=1629.51 00:33:58.902 lat (usec): min=9206, max=29777, avg=27778.69, stdev=1629.63 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[20579], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.902 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:58.902 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29754], 99.95th=[29754], 00:33:58.902 | 99.99th=[29754] 00:33:58.902 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2290.53, stdev=58.73, samples=19 00:33:58.902 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:58.902 lat (msec) : 10=0.44%, 20=0.51%, 50=99.06% 00:33:58.902 cpu : usr=98.40%, sys=1.26%, ctx=10, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename2: (groupid=0, jobs=1): err= 0: pid=2995204: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:58.902 slat (nsec): min=5831, max=59046, avg=27659.43, stdev=8553.66 00:33:58.902 clat (usec): min=12882, max=48732, avg=27852.30, stdev=1431.69 00:33:58.902 lat (usec): min=12908, max=48749, avg=27879.96, stdev=1430.98 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.902 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.902 | 99.00th=[28705], 99.50th=[28967], 99.90th=[48497], 99.95th=[48497], 00:33:58.902 | 99.99th=[48497] 00:33:58.902 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2270.53, stdev=71.25, samples=19 00:33:58.902 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:33:58.902 lat (msec) : 20=0.33%, 50=99.67% 00:33:58.902 cpu : usr=98.33%, sys=1.32%, ctx=13, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.902 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.902 filename2: (groupid=0, jobs=1): err= 0: pid=2995205: Wed Nov 20 16:26:57 2024 00:33:58.902 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:33:58.902 slat (nsec): min=5650, max=64055, avg=27960.65, stdev=8823.53 00:33:58.902 clat 
(usec): min=12915, max=54147, avg=27844.65, stdev=1453.82 00:33:58.902 lat (usec): min=12928, max=54162, avg=27872.61, stdev=1453.19 00:33:58.902 clat percentiles (usec): 00:33:58.902 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:58.902 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:58.902 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:58.902 | 99.00th=[28705], 99.50th=[29230], 99.90th=[48497], 99.95th=[48497], 00:33:58.902 | 99.99th=[54264] 00:33:58.902 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2270.53, stdev=71.25, samples=19 00:33:58.902 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:33:58.902 lat (msec) : 20=0.35%, 50=99.61%, 100=0.04% 00:33:58.902 cpu : usr=98.29%, sys=1.38%, ctx=8, majf=0, minf=9 00:33:58.902 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.903 filename2: (groupid=0, jobs=1): err= 0: pid=2995206: Wed Nov 20 16:26:57 2024 00:33:58.903 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:33:58.903 slat (nsec): min=7319, max=92446, avg=50205.97, stdev=13143.00 00:33:58.903 clat (usec): min=13174, max=48333, avg=27660.17, stdev=1419.43 00:33:58.903 lat (usec): min=13205, max=48359, avg=27710.38, stdev=1418.66 00:33:58.903 clat percentiles (usec): 00:33:58.903 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:58.903 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:58.903 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:58.903 | 99.00th=[28705], 99.50th=[28967], 99.90th=[48497], 99.95th=[48497], 00:33:58.903 
| 99.99th=[48497] 00:33:58.903 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2270.32, stdev=71.93, samples=19 00:33:58.903 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:33:58.903 lat (msec) : 20=0.28%, 50=99.72% 00:33:58.903 cpu : usr=98.60%, sys=1.01%, ctx=12, majf=0, minf=9 00:33:58.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:58.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.903 filename2: (groupid=0, jobs=1): err= 0: pid=2995207: Wed Nov 20 16:26:57 2024 00:33:58.903 read: IOPS=574, BW=2299KiB/s (2354kB/s)(22.5MiB/10004msec) 00:33:58.903 slat (nsec): min=6853, max=76277, avg=14959.70, stdev=10024.98 00:33:58.903 clat (usec): min=8247, max=48345, avg=27778.28, stdev=2605.77 00:33:58.903 lat (usec): min=8254, max=48370, avg=27793.24, stdev=2605.62 00:33:58.903 clat percentiles (usec): 00:33:58.903 | 1.00th=[17433], 5.00th=[24511], 10.00th=[27919], 20.00th=[27919], 00:33:58.903 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:58.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:58.903 | 99.00th=[38536], 99.50th=[39584], 99.90th=[48497], 99.95th=[48497], 00:33:58.903 | 99.99th=[48497] 00:33:58.903 bw ( KiB/s): min= 2064, max= 2416, per=4.18%, avg=2290.53, stdev=77.34, samples=19 00:33:58.903 iops : min= 516, max= 604, avg=572.63, stdev=19.33, samples=19 00:33:58.903 lat (msec) : 10=0.24%, 20=1.91%, 50=97.84% 00:33:58.903 cpu : usr=98.27%, sys=1.38%, ctx=12, majf=0, minf=9 00:33:58.903 IO depths : 1=0.1%, 2=0.2%, 4=1.3%, 8=80.4%, 16=18.0%, 32=0.0%, >=64=0.0% 00:33:58.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 complete : 0=0.0%, 4=89.5%, 
8=9.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.903 filename2: (groupid=0, jobs=1): err= 0: pid=2995208: Wed Nov 20 16:26:57 2024 00:33:58.903 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.2MiB/10004msec) 00:33:58.903 slat (nsec): min=6632, max=99316, avg=51221.32, stdev=14297.41 00:33:58.903 clat (usec): min=3807, max=76430, avg=27676.66, stdev=2521.81 00:33:58.903 lat (usec): min=3814, max=76471, avg=27727.88, stdev=2522.31 00:33:58.903 clat percentiles (usec): 00:33:58.903 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:58.903 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:58.903 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:58.903 | 99.00th=[28705], 99.50th=[28967], 99.90th=[69731], 99.95th=[69731], 00:33:58.903 | 99.99th=[76022] 00:33:58.903 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2263.58, stdev=74.55, samples=19 00:33:58.903 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:33:58.903 lat (msec) : 4=0.12%, 20=0.28%, 50=99.31%, 100=0.28% 00:33:58.903 cpu : usr=98.51%, sys=1.08%, ctx=30, majf=0, minf=9 00:33:58.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:58.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.903 issued rwts: total=5687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:58.903 00:33:58.903 Run status group 0 (all jobs): 00:33:58.903 READ: bw=53.5MiB/s (56.1MB/s), 2274KiB/s-2377KiB/s (2328kB/s-2434kB/s), io=536MiB (562MB), run=10001-10021msec 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:58.903 16:26:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:58.903 
16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 bdev_null0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:33:58.903 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.904 [2024-11-20 16:26:58.224103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.904 bdev_null1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:58.904 { 00:33:58.904 "params": { 00:33:58.904 "name": "Nvme$subsystem", 00:33:58.904 "trtype": "$TEST_TRANSPORT", 00:33:58.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.904 "adrfam": "ipv4", 00:33:58.904 "trsvcid": "$NVMF_PORT", 00:33:58.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.904 "hdgst": ${hdgst:-false}, 00:33:58.904 "ddgst": ${ddgst:-false} 00:33:58.904 }, 00:33:58.904 "method": "bdev_nvme_attach_controller" 00:33:58.904 } 00:33:58.904 EOF 00:33:58.904 )") 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:58.904 16:26:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:58.904 { 00:33:58.904 "params": { 00:33:58.904 "name": "Nvme$subsystem", 00:33:58.904 "trtype": "$TEST_TRANSPORT", 00:33:58.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.904 "adrfam": "ipv4", 00:33:58.904 "trsvcid": "$NVMF_PORT", 00:33:58.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.904 "hdgst": ${hdgst:-false}, 00:33:58.904 "ddgst": ${ddgst:-false} 00:33:58.904 }, 00:33:58.904 "method": "bdev_nvme_attach_controller" 00:33:58.904 } 00:33:58.904 EOF 00:33:58.904 )") 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:58.904 "params": { 00:33:58.904 "name": "Nvme0", 00:33:58.904 "trtype": "tcp", 00:33:58.904 "traddr": "10.0.0.2", 00:33:58.904 "adrfam": "ipv4", 00:33:58.904 "trsvcid": "4420", 00:33:58.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:58.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:58.904 "hdgst": false, 00:33:58.904 "ddgst": false 00:33:58.904 }, 00:33:58.904 "method": "bdev_nvme_attach_controller" 00:33:58.904 },{ 00:33:58.904 "params": { 00:33:58.904 "name": "Nvme1", 00:33:58.904 "trtype": "tcp", 00:33:58.904 "traddr": "10.0.0.2", 00:33:58.904 "adrfam": "ipv4", 00:33:58.904 "trsvcid": "4420", 00:33:58.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:58.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:58.904 "hdgst": false, 00:33:58.904 "ddgst": false 00:33:58.904 }, 00:33:58.904 "method": "bdev_nvme_attach_controller" 00:33:58.904 }' 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:58.904 16:26:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:58.904 16:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.904 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:58.904 ... 00:33:58.904 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:58.904 ... 00:33:58.904 fio-3.35 00:33:58.904 Starting 4 threads 00:34:04.172 00:34:04.172 filename0: (groupid=0, jobs=1): err= 0: pid=2997174: Wed Nov 20 16:27:04 2024 00:34:04.172 read: IOPS=2484, BW=19.4MiB/s (20.4MB/s)(97.1MiB/5001msec) 00:34:04.172 slat (nsec): min=6173, max=37801, avg=8846.00, stdev=3098.02 00:34:04.172 clat (usec): min=619, max=5707, avg=3194.48, stdev=503.38 00:34:04.172 lat (usec): min=631, max=5714, avg=3203.33, stdev=503.02 00:34:04.172 clat percentiles (usec): 00:34:04.172 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2737], 20.00th=[ 2933], 00:34:04.172 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:04.172 | 70.00th=[ 3294], 80.00th=[ 3490], 90.00th=[ 3785], 95.00th=[ 4178], 00:34:04.172 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 5538], 00:34:04.172 | 99.99th=[ 5669] 00:34:04.172 bw ( KiB/s): min=18560, max=20928, per=23.77%, avg=19916.44, stdev=813.28, samples=9 00:34:04.172 iops : min= 2320, max= 2616, avg=2489.56, stdev=101.66, samples=9 00:34:04.172 lat (usec) : 750=0.02%, 1000=0.01% 00:34:04.172 lat (msec) : 2=0.37%, 4=92.71%, 10=6.89% 00:34:04.173 cpu : usr=95.84%, sys=3.82%, ctx=6, majf=0, minf=9 00:34:04.173 IO depths : 1=0.1%, 2=1.6%, 4=69.5%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 
complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 issued rwts: total=12427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:04.173 filename0: (groupid=0, jobs=1): err= 0: pid=2997175: Wed Nov 20 16:27:04 2024 00:34:04.173 read: IOPS=2664, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:34:04.173 slat (nsec): min=6126, max=35514, avg=9024.10, stdev=3129.64 00:34:04.173 clat (usec): min=669, max=5523, avg=2975.33, stdev=427.12 00:34:04.173 lat (usec): min=681, max=5533, avg=2984.35, stdev=426.81 00:34:04.173 clat percentiles (usec): 00:34:04.173 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2638], 00:34:04.173 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:34:04.173 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3720], 00:34:04.173 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5080], 99.95th=[ 5276], 00:34:04.173 | 99.99th=[ 5538] 00:34:04.173 bw ( KiB/s): min=20624, max=22352, per=25.50%, avg=21369.89, stdev=623.84, samples=9 00:34:04.173 iops : min= 2578, max= 2794, avg=2671.22, stdev=77.99, samples=9 00:34:04.173 lat (usec) : 750=0.01%, 1000=0.01% 00:34:04.173 lat (msec) : 2=0.83%, 4=97.10%, 10=2.06% 00:34:04.173 cpu : usr=96.06%, sys=3.60%, ctx=8, majf=0, minf=9 00:34:04.173 IO depths : 1=0.2%, 2=4.5%, 4=67.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 issued rwts: total=13327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:04.173 filename1: (groupid=0, jobs=1): err= 0: pid=2997176: Wed Nov 20 16:27:04 2024 00:34:04.173 read: IOPS=2484, BW=19.4MiB/s (20.4MB/s)(97.1MiB/5001msec) 00:34:04.173 slat (nsec): min=6170, max=29658, avg=8570.22, stdev=2852.89 00:34:04.173 clat 
(usec): min=940, max=45877, avg=3195.02, stdev=1174.59 00:34:04.173 lat (usec): min=946, max=45896, avg=3203.59, stdev=1174.50 00:34:04.173 clat percentiles (usec): 00:34:04.173 | 1.00th=[ 2180], 5.00th=[ 2573], 10.00th=[ 2737], 20.00th=[ 2933], 00:34:04.173 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:04.173 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3720], 95.00th=[ 4015], 00:34:04.173 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5407], 99.95th=[45876], 00:34:04.173 | 99.99th=[45876] 00:34:04.173 bw ( KiB/s): min=18640, max=20528, per=23.65%, avg=19816.89, stdev=691.58, samples=9 00:34:04.173 iops : min= 2330, max= 2566, avg=2477.11, stdev=86.45, samples=9 00:34:04.173 lat (usec) : 1000=0.05% 00:34:04.173 lat (msec) : 2=0.38%, 4=94.17%, 10=5.34%, 50=0.06% 00:34:04.173 cpu : usr=96.14%, sys=3.54%, ctx=10, majf=0, minf=9 00:34:04.173 IO depths : 1=0.1%, 2=1.8%, 4=71.2%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 issued rwts: total=12425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:04.173 filename1: (groupid=0, jobs=1): err= 0: pid=2997177: Wed Nov 20 16:27:04 2024 00:34:04.173 read: IOPS=2840, BW=22.2MiB/s (23.3MB/s)(111MiB/5001msec) 00:34:04.173 slat (nsec): min=6187, max=34813, avg=9010.86, stdev=3037.03 00:34:04.173 clat (usec): min=904, max=5367, avg=2789.81, stdev=422.61 00:34:04.173 lat (usec): min=922, max=5382, avg=2798.82, stdev=422.38 00:34:04.173 clat percentiles (usec): 00:34:04.173 | 1.00th=[ 1680], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:34:04.173 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2933], 00:34:04.173 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 3458], 00:34:04.173 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4686], 
99.95th=[ 4948], 00:34:04.173 | 99.99th=[ 5342] 00:34:04.173 bw ( KiB/s): min=21696, max=23872, per=27.12%, avg=22725.33, stdev=605.79, samples=9 00:34:04.173 iops : min= 2712, max= 2984, avg=2840.67, stdev=75.72, samples=9 00:34:04.173 lat (usec) : 1000=0.23% 00:34:04.173 lat (msec) : 2=2.06%, 4=96.54%, 10=1.18% 00:34:04.173 cpu : usr=95.94%, sys=3.74%, ctx=7, majf=0, minf=9 00:34:04.173 IO depths : 1=0.3%, 2=6.9%, 4=63.5%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.173 issued rwts: total=14205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:04.173 00:34:04.173 Run status group 0 (all jobs): 00:34:04.173 READ: bw=81.8MiB/s (85.8MB/s), 19.4MiB/s-22.2MiB/s (20.4MB/s-23.3MB/s), io=409MiB (429MB), run=5001-5001msec 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:04.173 16:27:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.173 00:34:04.173 real 0m24.271s 00:34:04.173 user 4m52.051s 00:34:04.173 sys 0m5.232s 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.173 16:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.173 ************************************ 00:34:04.173 END TEST fio_dif_rand_params 00:34:04.173 ************************************ 00:34:04.173 16:27:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:04.173 16:27:04 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:04.173 16:27:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.173 16:27:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:04.173 ************************************ 00:34:04.173 START TEST fio_dif_digest 00:34:04.173 ************************************ 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:04.173 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 bdev_null0 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.174 [2024-11-20 16:27:04.605447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:04.174 { 00:34:04.174 "params": { 00:34:04.174 "name": "Nvme$subsystem", 00:34:04.174 "trtype": "$TEST_TRANSPORT", 00:34:04.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.174 "adrfam": "ipv4", 00:34:04.174 "trsvcid": "$NVMF_PORT", 00:34:04.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.174 "hdgst": ${hdgst:-false}, 00:34:04.174 "ddgst": ${ddgst:-false} 00:34:04.174 }, 00:34:04.174 "method": "bdev_nvme_attach_controller" 00:34:04.174 } 00:34:04.174 EOF 00:34:04.174 )") 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:04.174 "params": { 00:34:04.174 "name": "Nvme0", 00:34:04.174 "trtype": "tcp", 00:34:04.174 "traddr": "10.0.0.2", 00:34:04.174 "adrfam": "ipv4", 00:34:04.174 "trsvcid": "4420", 00:34:04.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:04.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:04.174 "hdgst": true, 00:34:04.174 "ddgst": true 00:34:04.174 }, 00:34:04.174 "method": "bdev_nvme_attach_controller" 00:34:04.174 }' 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:04.174 16:27:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.174 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:04.174 ... 
00:34:04.174 fio-3.35 00:34:04.174 Starting 3 threads 00:34:16.383 00:34:16.383 filename0: (groupid=0, jobs=1): err= 0: pid=2998452: Wed Nov 20 16:27:15 2024 00:34:16.383 read: IOPS=277, BW=34.6MiB/s (36.3MB/s)(348MiB/10044msec) 00:34:16.383 slat (nsec): min=6694, max=42395, avg=17379.04, stdev=7495.98 00:34:16.383 clat (usec): min=6031, max=49447, avg=10792.55, stdev=1272.63 00:34:16.383 lat (usec): min=6044, max=49460, avg=10809.93, stdev=1272.47 00:34:16.383 clat percentiles (usec): 00:34:16.383 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:34:16.383 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:34:16.383 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:16.383 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13698], 99.95th=[43779], 00:34:16.383 | 99.99th=[49546] 00:34:16.383 bw ( KiB/s): min=33280, max=38400, per=33.17%, avg=35596.80, stdev=1255.44, samples=20 00:34:16.383 iops : min= 260, max= 300, avg=278.10, stdev= 9.81, samples=20 00:34:16.383 lat (msec) : 10=15.95%, 20=83.97%, 50=0.07% 00:34:16.383 cpu : usr=93.96%, sys=4.27%, ctx=658, majf=0, minf=52 00:34:16.383 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.383 issued rwts: total=2783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.383 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:16.383 filename0: (groupid=0, jobs=1): err= 0: pid=2998454: Wed Nov 20 16:27:15 2024 00:34:16.383 read: IOPS=294, BW=36.9MiB/s (38.7MB/s)(370MiB/10045msec) 00:34:16.383 slat (nsec): min=6503, max=78788, avg=17634.53, stdev=6660.84 00:34:16.383 clat (usec): min=7668, max=51596, avg=10136.18, stdev=1806.05 00:34:16.383 lat (usec): min=7678, max=51620, avg=10153.81, stdev=1806.37 00:34:16.383 clat percentiles (usec): 00:34:16.383 | 
1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9503], 00:34:16.383 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:34:16.383 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:34:16.383 | 99.00th=[11863], 99.50th=[12256], 99.90th=[51119], 99.95th=[51643], 00:34:16.383 | 99.99th=[51643] 00:34:16.383 bw ( KiB/s): min=34816, max=39680, per=35.32%, avg=37900.80, stdev=1121.97, samples=20 00:34:16.383 iops : min= 272, max= 310, avg=296.10, stdev= 8.77, samples=20 00:34:16.383 lat (msec) : 10=46.37%, 20=53.46%, 50=0.03%, 100=0.13% 00:34:16.383 cpu : usr=96.08%, sys=3.59%, ctx=37, majf=0, minf=56 00:34:16.383 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.383 issued rwts: total=2963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:16.384 filename0: (groupid=0, jobs=1): err= 0: pid=2998455: Wed Nov 20 16:27:15 2024 00:34:16.384 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(334MiB/10043msec) 00:34:16.384 slat (nsec): min=6555, max=43472, avg=15699.28, stdev=7000.83 00:34:16.384 clat (usec): min=7115, max=48692, avg=11230.52, stdev=1277.79 00:34:16.384 lat (usec): min=7128, max=48701, avg=11246.22, stdev=1277.86 00:34:16.384 clat percentiles (usec): 00:34:16.384 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:34:16.384 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:34:16.384 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:16.384 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14484], 99.95th=[44827], 00:34:16.384 | 99.99th=[48497] 00:34:16.384 bw ( KiB/s): min=32768, max=35328, per=31.88%, avg=34214.40, stdev=797.85, samples=20 00:34:16.384 iops : min= 256, max= 276, avg=267.30, 
stdev= 6.23, samples=20 00:34:16.384 lat (msec) : 10=5.35%, 20=94.58%, 50=0.07% 00:34:16.384 cpu : usr=96.83%, sys=2.86%, ctx=14, majf=0, minf=79 00:34:16.384 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.384 issued rwts: total=2675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:16.384 00:34:16.384 Run status group 0 (all jobs): 00:34:16.384 READ: bw=105MiB/s (110MB/s), 33.3MiB/s-36.9MiB/s (34.9MB/s-38.7MB/s), io=1053MiB (1104MB), run=10043-10045msec 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.384 
00:34:16.384 real 0m11.278s 00:34:16.384 user 0m35.815s 00:34:16.384 sys 0m1.442s 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.384 16:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:16.384 ************************************ 00:34:16.384 END TEST fio_dif_digest 00:34:16.384 ************************************ 00:34:16.384 16:27:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:16.384 16:27:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.384 rmmod nvme_tcp 00:34:16.384 rmmod nvme_fabrics 00:34:16.384 rmmod nvme_keyring 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2989839 ']' 00:34:16.384 16:27:15 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2989839 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2989839 ']' 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2989839 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989839 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989839' 00:34:16.384 killing process with pid 2989839 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2989839 00:34:16.384 16:27:15 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2989839 00:34:16.384 16:27:16 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:16.384 16:27:16 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:18.289 Waiting for block devices as requested 00:34:18.289 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:18.289 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:18.289 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.548 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.548 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.548 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:18.548 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:18.807 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:18.807 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:18.807 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:19.066 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:19.066 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:19.066 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:19.326 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.326 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.326 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.326 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:19.585 16:27:20 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:19.585 16:27:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.585 16:27:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:19.585 16:27:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.491 16:27:22 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:21.491 00:34:21.491 real 1m14.335s 00:34:21.491 user 7m9.956s 00:34:21.491 sys 0m20.665s 00:34:21.491 16:27:22 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.491 16:27:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:21.491 ************************************ 00:34:21.491 END TEST nvmf_dif 00:34:21.491 ************************************ 00:34:21.750 16:27:22 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:21.750 16:27:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:21.750 16:27:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.750 16:27:22 -- common/autotest_common.sh@10 -- # set +x 00:34:21.750 ************************************ 00:34:21.750 START TEST nvmf_abort_qd_sizes 00:34:21.750 ************************************ 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:21.750 * Looking for test storage... 
00:34:21.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.750 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.751 --rc genhtml_branch_coverage=1 00:34:21.751 --rc genhtml_function_coverage=1 00:34:21.751 --rc genhtml_legend=1 00:34:21.751 --rc geninfo_all_blocks=1 00:34:21.751 --rc geninfo_unexecuted_blocks=1 00:34:21.751 00:34:21.751 ' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.751 --rc genhtml_branch_coverage=1 00:34:21.751 --rc genhtml_function_coverage=1 00:34:21.751 --rc genhtml_legend=1 00:34:21.751 --rc 
geninfo_all_blocks=1 00:34:21.751 --rc geninfo_unexecuted_blocks=1 00:34:21.751 00:34:21.751 ' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.751 --rc genhtml_branch_coverage=1 00:34:21.751 --rc genhtml_function_coverage=1 00:34:21.751 --rc genhtml_legend=1 00:34:21.751 --rc geninfo_all_blocks=1 00:34:21.751 --rc geninfo_unexecuted_blocks=1 00:34:21.751 00:34:21.751 ' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.751 --rc genhtml_branch_coverage=1 00:34:21.751 --rc genhtml_function_coverage=1 00:34:21.751 --rc genhtml_legend=1 00:34:21.751 --rc geninfo_all_blocks=1 00:34:21.751 --rc geninfo_unexecuted_blocks=1 00:34:21.751 00:34:21.751 ' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.751 16:27:22 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.751 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.011 16:27:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.011 16:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:28.584 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.585 16:27:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:28.585 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:28.585 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:28.585 Found net devices under 0000:86:00.0: cvl_0_0 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:28.585 Found net devices under 0000:86:00.1: cvl_0_1 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:34:28.585 00:34:28.585 --- 10.0.0.2 ping statistics --- 00:34:28.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.585 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:34:28.585 00:34:28.585 --- 10.0.0.1 ping statistics --- 00:34:28.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.585 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:28.585 16:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:30.491 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:30.491 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:30.750 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:30.750 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:30.750 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:30.750 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:30.750 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:30.750 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:31.688 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:31.688 16:27:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3006760 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3006760 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3006760 ']' 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.688 16:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:31.688 [2024-11-20 16:27:32.417407] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:34:31.688 [2024-11-20 16:27:32.417453] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.689 [2024-11-20 16:27:32.499643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:31.946 [2024-11-20 16:27:32.544055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.947 [2024-11-20 16:27:32.544093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.947 [2024-11-20 16:27:32.544100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.947 [2024-11-20 16:27:32.544106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.947 [2024-11-20 16:27:32.544114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:31.947 [2024-11-20 16:27:32.545606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.947 [2024-11-20 16:27:32.545713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:31.947 [2024-11-20 16:27:32.545823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.947 [2024-11-20 16:27:32.545824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:32.511 16:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:32.512 16:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:32.512 16:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:32.512 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:32.512 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.512 16:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.512 ************************************ 00:34:32.512 START TEST spdk_target_abort 00:34:32.512 ************************************ 00:34:32.512 16:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:32.512 16:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:32.512 16:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:32.512 16:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.769 16:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.048 spdk_targetn1 00:34:36.048 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.048 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:36.048 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.048 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.048 [2024-11-20 16:27:36.180082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.048 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.049 [2024-11-20 16:27:36.220349] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.049 16:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.339 Initializing NVMe Controllers 00:34:39.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.339 Initialization complete. Launching workers. 
00:34:39.339 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15534, failed: 0 00:34:39.339 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1333, failed to submit 14201 00:34:39.339 success 745, unsuccessful 588, failed 0 00:34:39.339 16:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.339 16:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.617 Initializing NVMe Controllers 00:34:42.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:42.617 Initialization complete. Launching workers. 00:34:42.617 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8659, failed: 0 00:34:42.617 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7396 00:34:42.617 success 354, unsuccessful 909, failed 0 00:34:42.617 16:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.617 16:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.898 Initializing NVMe Controllers 00:34:45.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.898 Initialization complete. Launching workers. 
00:34:45.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37882, failed: 0 00:34:45.898 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2864, failed to submit 35018 00:34:45.898 success 566, unsuccessful 2298, failed 0 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.898 16:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3006760 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3006760 ']' 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3006760 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006760 00:34:46.832 16:27:47 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006760' 00:34:46.832 killing process with pid 3006760 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3006760 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3006760 00:34:46.832 00:34:46.832 real 0m14.317s 00:34:46.832 user 0m57.065s 00:34:46.832 sys 0m2.601s 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.832 16:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:46.832 ************************************ 00:34:46.832 END TEST spdk_target_abort 00:34:46.832 ************************************ 00:34:47.092 16:27:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:47.092 16:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:47.092 16:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.092 16:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:47.092 ************************************ 00:34:47.092 START TEST kernel_target_abort 00:34:47.092 ************************************ 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:47.092 16:27:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:47.092 16:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:49.631 Waiting for block devices as requested 00:34:49.891 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:49.891 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:49.891 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.150 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.150 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.150 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.410 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:50.410 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:50.410 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:50.410 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:50.670 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.670 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.670 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.930 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.930 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:50.930 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:50.930 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:51.190 16:27:51 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:51.190 No valid GPT data, bailing 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:51.190 00:34:51.190 Discovery Log Number of Records 2, Generation counter 2 00:34:51.190 =====Discovery Log Entry 0====== 00:34:51.190 trtype: tcp 00:34:51.190 adrfam: ipv4 00:34:51.190 subtype: current discovery subsystem 00:34:51.190 treq: not specified, sq flow control disable supported 00:34:51.190 portid: 1 00:34:51.190 trsvcid: 4420 00:34:51.190 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:51.190 traddr: 10.0.0.1 00:34:51.190 eflags: none 00:34:51.190 sectype: none 00:34:51.190 =====Discovery Log Entry 1====== 00:34:51.190 trtype: tcp 00:34:51.190 adrfam: ipv4 00:34:51.190 subtype: nvme subsystem 00:34:51.190 treq: not specified, sq flow control disable supported 00:34:51.190 portid: 1 00:34:51.190 trsvcid: 4420 00:34:51.190 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:51.190 traddr: 10.0.0.1 00:34:51.190 eflags: none 00:34:51.190 sectype: none 00:34:51.190 16:27:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:51.190 16:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:54.481 Initializing NVMe Controllers 00:34:54.481 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:54.481 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:54.481 Initialization complete. Launching workers. 
00:34:54.481 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92060, failed: 0 00:34:54.481 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92060, failed to submit 0 00:34:54.481 success 0, unsuccessful 92060, failed 0 00:34:54.481 16:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:54.481 16:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:57.768 Initializing NVMe Controllers 00:34:57.768 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:57.768 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:57.768 Initialization complete. Launching workers. 00:34:57.768 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145037, failed: 0 00:34:57.768 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36302, failed to submit 108735 00:34:57.768 success 0, unsuccessful 36302, failed 0 00:34:57.768 16:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:57.768 16:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:01.056 Initializing NVMe Controllers 00:35:01.056 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:01.056 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:01.056 Initialization complete. Launching workers. 
00:35:01.056 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136024, failed: 0 00:35:01.056 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34062, failed to submit 101962 00:35:01.056 success 0, unsuccessful 34062, failed 0 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:01.056 16:28:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:03.655 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:03.655 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:04.367 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:04.627 00:35:04.627 real 0m17.491s 00:35:04.627 user 0m9.233s 00:35:04.627 sys 0m4.990s 00:35:04.627 16:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.627 16:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:04.627 ************************************ 00:35:04.627 END TEST kernel_target_abort 00:35:04.627 ************************************ 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.627 rmmod nvme_tcp 00:35:04.627 rmmod nvme_fabrics 00:35:04.627 rmmod nvme_keyring 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3006760 ']' 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3006760 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3006760 ']' 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3006760 00:35:04.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3006760) - No such process 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3006760 is not found' 00:35:04.627 Process with pid 3006760 is not found 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:04.627 16:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:07.162 Waiting for block devices as requested 00:35:07.421 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:07.421 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:07.680 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:07.680 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:07.680 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:07.680 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:07.940 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:07.940 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:07.940 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:08.199 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:08.199 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:08.199 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:08.199 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:08.458 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:08.458 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:08.458 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:08.458 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:08.717 16:28:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.622 16:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.622 00:35:10.622 real 0m49.066s 00:35:10.622 user 1m10.849s 00:35:10.622 sys 0m16.328s 00:35:10.622 16:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.622 16:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:10.622 ************************************ 00:35:10.622 END TEST nvmf_abort_qd_sizes 00:35:10.622 ************************************ 00:35:10.882 16:28:11 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:10.882 16:28:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:10.882 16:28:11 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:10.882 16:28:11 -- common/autotest_common.sh@10 -- # set +x 00:35:10.882 ************************************ 00:35:10.882 START TEST keyring_file 00:35:10.882 ************************************ 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:10.882 * Looking for test storage... 00:35:10.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.882 16:28:11 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.882 16:28:11 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.882 --rc genhtml_branch_coverage=1 00:35:10.882 --rc genhtml_function_coverage=1 00:35:10.882 --rc genhtml_legend=1 00:35:10.882 --rc geninfo_all_blocks=1 00:35:10.882 --rc geninfo_unexecuted_blocks=1 00:35:10.882 00:35:10.882 ' 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.882 --rc genhtml_branch_coverage=1 00:35:10.882 --rc genhtml_function_coverage=1 00:35:10.882 --rc genhtml_legend=1 00:35:10.882 --rc geninfo_all_blocks=1 00:35:10.882 --rc 
geninfo_unexecuted_blocks=1 00:35:10.882 00:35:10.882 ' 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.882 --rc genhtml_branch_coverage=1 00:35:10.882 --rc genhtml_function_coverage=1 00:35:10.882 --rc genhtml_legend=1 00:35:10.882 --rc geninfo_all_blocks=1 00:35:10.882 --rc geninfo_unexecuted_blocks=1 00:35:10.882 00:35:10.882 ' 00:35:10.882 16:28:11 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:10.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.882 --rc genhtml_branch_coverage=1 00:35:10.882 --rc genhtml_function_coverage=1 00:35:10.882 --rc genhtml_legend=1 00:35:10.882 --rc geninfo_all_blocks=1 00:35:10.882 --rc geninfo_unexecuted_blocks=1 00:35:10.882 00:35:10.882 ' 00:35:10.882 16:28:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:10.882 16:28:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.882 16:28:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.882 16:28:11 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.142 16:28:11 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.142 16:28:11 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.142 16:28:11 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.142 16:28:11 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.142 16:28:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.142 16:28:11 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.142 16:28:11 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.142 16:28:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:11.142 16:28:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SGN5uzUcFQ 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SGN5uzUcFQ 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SGN5uzUcFQ 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SGN5uzUcFQ 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2lNdP7gwFE 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:11.142 16:28:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2lNdP7gwFE 00:35:11.142 16:28:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2lNdP7gwFE 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.2lNdP7gwFE 
00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=3015545 00:35:11.142 16:28:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3015545 00:35:11.143 16:28:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:11.143 16:28:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3015545 ']' 00:35:11.143 16:28:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.143 16:28:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.143 16:28:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.143 16:28:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.143 16:28:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:11.143 [2024-11-20 16:28:11.894266] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:35:11.143 [2024-11-20 16:28:11.894318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015545 ] 00:35:11.143 [2024-11-20 16:28:11.969492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.401 [2024-11-20 16:28:12.009877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.401 16:28:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.401 16:28:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:11.401 16:28:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:11.401 16:28:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.401 16:28:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:11.401 [2024-11-20 16:28:12.228899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.661 null0 00:35:11.661 [2024-11-20 16:28:12.260960] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:11.661 [2024-11-20 16:28:12.261318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.661 16:28:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:11.661 [2024-11-20 16:28:12.293035] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:11.661 request: 00:35:11.661 { 00:35:11.661 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.661 "secure_channel": false, 00:35:11.661 "listen_address": { 00:35:11.661 "trtype": "tcp", 00:35:11.661 "traddr": "127.0.0.1", 00:35:11.661 "trsvcid": "4420" 00:35:11.661 }, 00:35:11.661 "method": "nvmf_subsystem_add_listener", 00:35:11.661 "req_id": 1 00:35:11.661 } 00:35:11.661 Got JSON-RPC error response 00:35:11.661 response: 00:35:11.661 { 00:35:11.661 "code": -32602, 00:35:11.661 "message": "Invalid parameters" 00:35:11.661 } 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:11.661 16:28:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=3015549 00:35:11.661 16:28:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3015549 /var/tmp/bperf.sock 00:35:11.661 16:28:12 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:11.661 16:28:12 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3015549 ']' 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.661 16:28:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:11.661 [2024-11-20 16:28:12.348540] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:35:11.661 [2024-11-20 16:28:12.348582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015549 ] 00:35:11.661 [2024-11-20 16:28:12.424385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.661 [2024-11-20 16:28:12.465805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.920 16:28:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.920 16:28:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:11.920 16:28:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:11.920 16:28:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:12.179 16:28:12 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2lNdP7gwFE 00:35:12.179 16:28:12 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2lNdP7gwFE 00:35:12.179 16:28:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:12.179 16:28:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:12.179 16:28:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.179 16:28:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.179 16:28:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.438 16:28:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SGN5uzUcFQ == \/\t\m\p\/\t\m\p\.\S\G\N\5\u\z\U\c\F\Q ]] 00:35:12.438 16:28:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:12.438 16:28:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:12.438 16:28:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.438 16:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.438 16:28:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.697 16:28:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.2lNdP7gwFE == \/\t\m\p\/\t\m\p\.\2\l\N\d\P\7\g\w\F\E ]] 00:35:12.697 16:28:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:12.697 16:28:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.697 16:28:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.697 16:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.697 16:28:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.697 16:28:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:35:12.697 16:28:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:12.956 16:28:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:12.956 16:28:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:12.956 16:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.956 16:28:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.956 16:28:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.956 16:28:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.956 16:28:13 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:12.956 16:28:13 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.956 16:28:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.214 [2024-11-20 16:28:13.905400] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:13.214 nvme0n1 00:35:13.214 16:28:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:13.215 16:28:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.215 16:28:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.215 16:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.215 16:28:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.215 16:28:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:13.473 16:28:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:13.473 16:28:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:13.473 16:28:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:13.473 16:28:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.473 16:28:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:13.473 16:28:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.473 16:28:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.732 16:28:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:13.732 16:28:14 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:13.732 Running I/O for 1 seconds... 00:35:14.668 18654.00 IOPS, 72.87 MiB/s 00:35:14.668 Latency(us) 00:35:14.668 [2024-11-20T15:28:15.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.668 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:14.668 nvme0n1 : 1.00 18702.80 73.06 0.00 0.00 6831.28 2863.64 18008.15 00:35:14.668 [2024-11-20T15:28:15.505Z] =================================================================================================================== 00:35:14.668 [2024-11-20T15:28:15.505Z] Total : 18702.80 73.06 0.00 0.00 6831.28 2863.64 18008.15 00:35:14.668 { 00:35:14.668 "results": [ 00:35:14.668 { 00:35:14.668 "job": "nvme0n1", 00:35:14.668 "core_mask": "0x2", 00:35:14.668 "workload": "randrw", 00:35:14.668 "percentage": 50, 00:35:14.668 "status": "finished", 00:35:14.668 "queue_depth": 128, 00:35:14.668 "io_size": 4096, 00:35:14.668 "runtime": 1.004288, 00:35:14.668 "iops": 18702.802383380065, 00:35:14.668 "mibps": 73.05782181007838, 
00:35:14.668 "io_failed": 0, 00:35:14.668 "io_timeout": 0, 00:35:14.668 "avg_latency_us": 6831.282155695831, 00:35:14.668 "min_latency_us": 2863.6382608695653, 00:35:14.668 "max_latency_us": 18008.15304347826 00:35:14.668 } 00:35:14.668 ], 00:35:14.668 "core_count": 1 00:35:14.668 } 00:35:14.927 16:28:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:14.927 16:28:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:14.927 16:28:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:14.927 16:28:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.927 16:28:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.927 16:28:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.927 16:28:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.927 16:28:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.185 16:28:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:15.185 16:28:15 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:15.185 16:28:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:15.185 16:28:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.185 16:28:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.185 16:28:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.185 16:28:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.443 16:28:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:15.443 16:28:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.443 16:28:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:15.443 16:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:15.702 [2024-11-20 16:28:16.284752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:15.702 [2024-11-20 16:28:16.285462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aa1f0 (107): Transport endpoint is not connected 00:35:15.702 [2024-11-20 16:28:16.286456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aa1f0 (9): Bad file descriptor 00:35:15.702 [2024-11-20 16:28:16.287457] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:15.702 [2024-11-20 16:28:16.287467] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:15.702 [2024-11-20 16:28:16.287474] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:15.702 [2024-11-20 16:28:16.287482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:15.702 request: 00:35:15.702 { 00:35:15.702 "name": "nvme0", 00:35:15.702 "trtype": "tcp", 00:35:15.702 "traddr": "127.0.0.1", 00:35:15.702 "adrfam": "ipv4", 00:35:15.702 "trsvcid": "4420", 00:35:15.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.702 "prchk_reftag": false, 00:35:15.702 "prchk_guard": false, 00:35:15.702 "hdgst": false, 00:35:15.702 "ddgst": false, 00:35:15.702 "psk": "key1", 00:35:15.702 "allow_unrecognized_csi": false, 00:35:15.702 "method": "bdev_nvme_attach_controller", 00:35:15.702 "req_id": 1 00:35:15.702 } 00:35:15.702 Got JSON-RPC error response 00:35:15.702 response: 00:35:15.702 { 00:35:15.702 "code": -5, 00:35:15.702 "message": "Input/output error" 00:35:15.702 } 00:35:15.702 16:28:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:15.702 16:28:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.702 16:28:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.702 16:28:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.702 16:28:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.702 16:28:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:15.702 16:28:16 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.702 16:28:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.961 16:28:16 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:15.961 16:28:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:15.961 16:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:16.219 16:28:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:16.219 16:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:16.478 16:28:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:16.478 16:28:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:16.478 16:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.478 16:28:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:16.478 16:28:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.SGN5uzUcFQ 00:35:16.478 16:28:17 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.478 16:28:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:16.478 16:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:16.737 [2024-11-20 16:28:17.476598] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SGN5uzUcFQ': 0100660 00:35:16.737 [2024-11-20 16:28:17.476625] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:16.737 request: 00:35:16.737 { 00:35:16.737 "name": "key0", 00:35:16.737 "path": "/tmp/tmp.SGN5uzUcFQ", 00:35:16.737 "method": "keyring_file_add_key", 00:35:16.737 "req_id": 1 00:35:16.737 } 00:35:16.737 Got JSON-RPC error response 00:35:16.737 response: 00:35:16.737 { 00:35:16.737 "code": -1, 00:35:16.737 "message": "Operation not permitted" 00:35:16.737 } 00:35:16.737 16:28:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:16.737 16:28:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.737 16:28:17 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.737 16:28:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.737 16:28:17 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.SGN5uzUcFQ 00:35:16.737 16:28:17 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:16.737 16:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SGN5uzUcFQ 00:35:16.996 16:28:17 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.SGN5uzUcFQ 00:35:16.996 16:28:17 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:16.996 16:28:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.996 16:28:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.996 16:28:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.996 16:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.996 16:28:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:17.256 16:28:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:17.256 16:28:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.256 16:28:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:17.256 16:28:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.256 16:28:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:17.256 16:28:17 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.256 16:28:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:17.256 16:28:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.256 16:28:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.256 16:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.256 [2024-11-20 16:28:18.062165] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SGN5uzUcFQ': No such file or directory 00:35:17.256 [2024-11-20 16:28:18.062186] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:17.256 [2024-11-20 16:28:18.062201] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:17.256 [2024-11-20 16:28:18.062209] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:17.256 [2024-11-20 16:28:18.062216] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:17.256 [2024-11-20 16:28:18.062222] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:17.256 request: 00:35:17.256 { 00:35:17.256 "name": "nvme0", 00:35:17.256 "trtype": "tcp", 00:35:17.256 "traddr": "127.0.0.1", 00:35:17.256 "adrfam": "ipv4", 00:35:17.256 "trsvcid": "4420", 00:35:17.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.256 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:17.256 "prchk_reftag": false, 00:35:17.256 "prchk_guard": false, 00:35:17.256 "hdgst": false, 00:35:17.256 "ddgst": false, 00:35:17.256 "psk": "key0", 00:35:17.256 "allow_unrecognized_csi": false, 00:35:17.256 "method": "bdev_nvme_attach_controller", 00:35:17.256 "req_id": 1 00:35:17.256 } 00:35:17.256 Got JSON-RPC error response 00:35:17.256 response: 00:35:17.256 { 00:35:17.256 "code": -19, 00:35:17.256 "message": "No such device" 00:35:17.256 } 00:35:17.256 16:28:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:17.256 16:28:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.256 16:28:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.256 16:28:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.256 16:28:18 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:17.256 16:28:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:17.516 16:28:18 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JP3A4Mb0Bc 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:17.516 16:28:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:17.516 16:28:18 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:17.516 16:28:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.516 16:28:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:17.516 16:28:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:17.516 16:28:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JP3A4Mb0Bc 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JP3A4Mb0Bc 00:35:17.516 16:28:18 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.JP3A4Mb0Bc 00:35:17.516 16:28:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JP3A4Mb0Bc 00:35:17.516 16:28:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JP3A4Mb0Bc 00:35:17.774 16:28:18 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.774 16:28:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.033 nvme0n1 00:35:18.033 16:28:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:18.033 16:28:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.033 16:28:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.033 16:28:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.033 16:28:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.033 
16:28:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.291 16:28:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:18.291 16:28:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:18.291 16:28:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:18.549 16:28:19 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:18.549 16:28:19 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.549 16:28:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:18.549 16:28:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.549 16:28:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.807 16:28:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:18.807 16:28:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:18.807 16:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:19.066 16:28:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:19.066 16:28:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:19.066 16:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.324 16:28:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:19.324 16:28:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JP3A4Mb0Bc 00:35:19.324 16:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JP3A4Mb0Bc 00:35:19.324 16:28:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2lNdP7gwFE 00:35:19.324 16:28:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2lNdP7gwFE 00:35:19.583 16:28:20 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:19.583 16:28:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:19.841 nvme0n1 00:35:19.841 16:28:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:19.841 16:28:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:20.101 16:28:20 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:20.101 "subsystems": [ 00:35:20.101 { 00:35:20.101 "subsystem": "keyring", 00:35:20.101 
"config": [ 00:35:20.101 { 00:35:20.101 "method": "keyring_file_add_key", 00:35:20.101 "params": { 00:35:20.101 "name": "key0", 00:35:20.101 "path": "/tmp/tmp.JP3A4Mb0Bc" 00:35:20.101 } 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "method": "keyring_file_add_key", 00:35:20.101 "params": { 00:35:20.101 "name": "key1", 00:35:20.101 "path": "/tmp/tmp.2lNdP7gwFE" 00:35:20.101 } 00:35:20.101 } 00:35:20.101 ] 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "subsystem": "iobuf", 00:35:20.101 "config": [ 00:35:20.101 { 00:35:20.101 "method": "iobuf_set_options", 00:35:20.101 "params": { 00:35:20.101 "small_pool_count": 8192, 00:35:20.101 "large_pool_count": 1024, 00:35:20.101 "small_bufsize": 8192, 00:35:20.101 "large_bufsize": 135168, 00:35:20.101 "enable_numa": false 00:35:20.101 } 00:35:20.101 } 00:35:20.101 ] 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "subsystem": "sock", 00:35:20.101 "config": [ 00:35:20.101 { 00:35:20.101 "method": "sock_set_default_impl", 00:35:20.101 "params": { 00:35:20.101 "impl_name": "posix" 00:35:20.101 } 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "method": "sock_impl_set_options", 00:35:20.101 "params": { 00:35:20.101 "impl_name": "ssl", 00:35:20.101 "recv_buf_size": 4096, 00:35:20.101 "send_buf_size": 4096, 00:35:20.101 "enable_recv_pipe": true, 00:35:20.101 "enable_quickack": false, 00:35:20.101 "enable_placement_id": 0, 00:35:20.101 "enable_zerocopy_send_server": true, 00:35:20.101 "enable_zerocopy_send_client": false, 00:35:20.101 "zerocopy_threshold": 0, 00:35:20.101 "tls_version": 0, 00:35:20.101 "enable_ktls": false 00:35:20.101 } 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "method": "sock_impl_set_options", 00:35:20.101 "params": { 00:35:20.101 "impl_name": "posix", 00:35:20.101 "recv_buf_size": 2097152, 00:35:20.101 "send_buf_size": 2097152, 00:35:20.101 "enable_recv_pipe": true, 00:35:20.101 "enable_quickack": false, 00:35:20.101 "enable_placement_id": 0, 00:35:20.101 "enable_zerocopy_send_server": true, 00:35:20.101 
"enable_zerocopy_send_client": false, 00:35:20.101 "zerocopy_threshold": 0, 00:35:20.101 "tls_version": 0, 00:35:20.101 "enable_ktls": false 00:35:20.101 } 00:35:20.101 } 00:35:20.101 ] 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "subsystem": "vmd", 00:35:20.101 "config": [] 00:35:20.101 }, 00:35:20.101 { 00:35:20.101 "subsystem": "accel", 00:35:20.101 "config": [ 00:35:20.101 { 00:35:20.101 "method": "accel_set_options", 00:35:20.101 "params": { 00:35:20.101 "small_cache_size": 128, 00:35:20.101 "large_cache_size": 16, 00:35:20.101 "task_count": 2048, 00:35:20.102 "sequence_count": 2048, 00:35:20.102 "buf_count": 2048 00:35:20.102 } 00:35:20.102 } 00:35:20.102 ] 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "subsystem": "bdev", 00:35:20.102 "config": [ 00:35:20.102 { 00:35:20.102 "method": "bdev_set_options", 00:35:20.102 "params": { 00:35:20.102 "bdev_io_pool_size": 65535, 00:35:20.102 "bdev_io_cache_size": 256, 00:35:20.102 "bdev_auto_examine": true, 00:35:20.102 "iobuf_small_cache_size": 128, 00:35:20.102 "iobuf_large_cache_size": 16 00:35:20.102 } 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "method": "bdev_raid_set_options", 00:35:20.102 "params": { 00:35:20.102 "process_window_size_kb": 1024, 00:35:20.102 "process_max_bandwidth_mb_sec": 0 00:35:20.102 } 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "method": "bdev_iscsi_set_options", 00:35:20.102 "params": { 00:35:20.102 "timeout_sec": 30 00:35:20.102 } 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "method": "bdev_nvme_set_options", 00:35:20.102 "params": { 00:35:20.102 "action_on_timeout": "none", 00:35:20.102 "timeout_us": 0, 00:35:20.102 "timeout_admin_us": 0, 00:35:20.102 "keep_alive_timeout_ms": 10000, 00:35:20.102 "arbitration_burst": 0, 00:35:20.102 "low_priority_weight": 0, 00:35:20.102 "medium_priority_weight": 0, 00:35:20.102 "high_priority_weight": 0, 00:35:20.102 "nvme_adminq_poll_period_us": 10000, 00:35:20.102 "nvme_ioq_poll_period_us": 0, 00:35:20.102 "io_queue_requests": 512, 00:35:20.102 
"delay_cmd_submit": true, 00:35:20.102 "transport_retry_count": 4, 00:35:20.102 "bdev_retry_count": 3, 00:35:20.102 "transport_ack_timeout": 0, 00:35:20.102 "ctrlr_loss_timeout_sec": 0, 00:35:20.102 "reconnect_delay_sec": 0, 00:35:20.102 "fast_io_fail_timeout_sec": 0, 00:35:20.102 "disable_auto_failback": false, 00:35:20.102 "generate_uuids": false, 00:35:20.102 "transport_tos": 0, 00:35:20.102 "nvme_error_stat": false, 00:35:20.102 "rdma_srq_size": 0, 00:35:20.102 "io_path_stat": false, 00:35:20.102 "allow_accel_sequence": false, 00:35:20.102 "rdma_max_cq_size": 0, 00:35:20.102 "rdma_cm_event_timeout_ms": 0, 00:35:20.102 "dhchap_digests": [ 00:35:20.102 "sha256", 00:35:20.102 "sha384", 00:35:20.102 "sha512" 00:35:20.102 ], 00:35:20.102 "dhchap_dhgroups": [ 00:35:20.102 "null", 00:35:20.102 "ffdhe2048", 00:35:20.102 "ffdhe3072", 00:35:20.102 "ffdhe4096", 00:35:20.102 "ffdhe6144", 00:35:20.102 "ffdhe8192" 00:35:20.102 ] 00:35:20.102 } 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "method": "bdev_nvme_attach_controller", 00:35:20.102 "params": { 00:35:20.102 "name": "nvme0", 00:35:20.102 "trtype": "TCP", 00:35:20.102 "adrfam": "IPv4", 00:35:20.102 "traddr": "127.0.0.1", 00:35:20.102 "trsvcid": "4420", 00:35:20.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.102 "prchk_reftag": false, 00:35:20.102 "prchk_guard": false, 00:35:20.102 "ctrlr_loss_timeout_sec": 0, 00:35:20.102 "reconnect_delay_sec": 0, 00:35:20.102 "fast_io_fail_timeout_sec": 0, 00:35:20.102 "psk": "key0", 00:35:20.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.102 "hdgst": false, 00:35:20.102 "ddgst": false, 00:35:20.102 "multipath": "multipath" 00:35:20.102 } 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "method": "bdev_nvme_set_hotplug", 00:35:20.102 "params": { 00:35:20.102 "period_us": 100000, 00:35:20.102 "enable": false 00:35:20.102 } 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 "method": "bdev_wait_for_examine" 00:35:20.102 } 00:35:20.102 ] 00:35:20.102 }, 00:35:20.102 { 00:35:20.102 
"subsystem": "nbd", 00:35:20.102 "config": [] 00:35:20.102 } 00:35:20.102 ] 00:35:20.102 }' 00:35:20.102 16:28:20 keyring_file -- keyring/file.sh@115 -- # killprocess 3015549 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3015549 ']' 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3015549 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015549 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015549' 00:35:20.102 killing process with pid 3015549 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@973 -- # kill 3015549 00:35:20.102 Received shutdown signal, test time was about 1.000000 seconds 00:35:20.102 00:35:20.102 Latency(us) 00:35:20.102 [2024-11-20T15:28:20.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.102 [2024-11-20T15:28:20.939Z] =================================================================================================================== 00:35:20.102 [2024-11-20T15:28:20.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.102 16:28:20 keyring_file -- common/autotest_common.sh@978 -- # wait 3015549 00:35:20.361 16:28:21 keyring_file -- keyring/file.sh@118 -- # bperfpid=3017068 00:35:20.361 16:28:21 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3017068 /var/tmp/bperf.sock 00:35:20.361 16:28:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3017068 ']' 00:35:20.361 16:28:21 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:20.361 16:28:21 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:20.361 16:28:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.361 16:28:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:20.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:20.361 16:28:21 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:20.361 "subsystems": [ 00:35:20.361 { 00:35:20.361 "subsystem": "keyring", 00:35:20.361 "config": [ 00:35:20.361 { 00:35:20.361 "method": "keyring_file_add_key", 00:35:20.361 "params": { 00:35:20.361 "name": "key0", 00:35:20.361 "path": "/tmp/tmp.JP3A4Mb0Bc" 00:35:20.361 } 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "method": "keyring_file_add_key", 00:35:20.361 "params": { 00:35:20.361 "name": "key1", 00:35:20.361 "path": "/tmp/tmp.2lNdP7gwFE" 00:35:20.361 } 00:35:20.361 } 00:35:20.361 ] 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "subsystem": "iobuf", 00:35:20.361 "config": [ 00:35:20.361 { 00:35:20.361 "method": "iobuf_set_options", 00:35:20.361 "params": { 00:35:20.361 "small_pool_count": 8192, 00:35:20.361 "large_pool_count": 1024, 00:35:20.361 "small_bufsize": 8192, 00:35:20.361 "large_bufsize": 135168, 00:35:20.361 "enable_numa": false 00:35:20.361 } 00:35:20.361 } 00:35:20.361 ] 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "subsystem": "sock", 00:35:20.361 "config": [ 00:35:20.361 { 00:35:20.361 "method": "sock_set_default_impl", 00:35:20.361 "params": { 00:35:20.361 "impl_name": "posix" 00:35:20.361 } 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "method": "sock_impl_set_options", 00:35:20.361 "params": { 00:35:20.361 "impl_name": "ssl", 00:35:20.361 "recv_buf_size": 4096, 00:35:20.361 
"send_buf_size": 4096, 00:35:20.361 "enable_recv_pipe": true, 00:35:20.361 "enable_quickack": false, 00:35:20.361 "enable_placement_id": 0, 00:35:20.361 "enable_zerocopy_send_server": true, 00:35:20.361 "enable_zerocopy_send_client": false, 00:35:20.361 "zerocopy_threshold": 0, 00:35:20.361 "tls_version": 0, 00:35:20.361 "enable_ktls": false 00:35:20.361 } 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "method": "sock_impl_set_options", 00:35:20.361 "params": { 00:35:20.361 "impl_name": "posix", 00:35:20.361 "recv_buf_size": 2097152, 00:35:20.361 "send_buf_size": 2097152, 00:35:20.361 "enable_recv_pipe": true, 00:35:20.361 "enable_quickack": false, 00:35:20.361 "enable_placement_id": 0, 00:35:20.361 "enable_zerocopy_send_server": true, 00:35:20.361 "enable_zerocopy_send_client": false, 00:35:20.361 "zerocopy_threshold": 0, 00:35:20.361 "tls_version": 0, 00:35:20.361 "enable_ktls": false 00:35:20.361 } 00:35:20.361 } 00:35:20.361 ] 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "subsystem": "vmd", 00:35:20.361 "config": [] 00:35:20.361 }, 00:35:20.361 { 00:35:20.361 "subsystem": "accel", 00:35:20.361 "config": [ 00:35:20.361 { 00:35:20.361 "method": "accel_set_options", 00:35:20.361 "params": { 00:35:20.361 "small_cache_size": 128, 00:35:20.362 "large_cache_size": 16, 00:35:20.362 "task_count": 2048, 00:35:20.362 "sequence_count": 2048, 00:35:20.362 "buf_count": 2048 00:35:20.362 } 00:35:20.362 } 00:35:20.362 ] 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "subsystem": "bdev", 00:35:20.362 "config": [ 00:35:20.362 { 00:35:20.362 "method": "bdev_set_options", 00:35:20.362 "params": { 00:35:20.362 "bdev_io_pool_size": 65535, 00:35:20.362 "bdev_io_cache_size": 256, 00:35:20.362 "bdev_auto_examine": true, 00:35:20.362 "iobuf_small_cache_size": 128, 00:35:20.362 "iobuf_large_cache_size": 16 00:35:20.362 } 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "method": "bdev_raid_set_options", 00:35:20.362 "params": { 00:35:20.362 "process_window_size_kb": 1024, 00:35:20.362 
"process_max_bandwidth_mb_sec": 0 00:35:20.362 } 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "method": "bdev_iscsi_set_options", 00:35:20.362 "params": { 00:35:20.362 "timeout_sec": 30 00:35:20.362 } 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "method": "bdev_nvme_set_options", 00:35:20.362 "params": { 00:35:20.362 "action_on_timeout": "none", 00:35:20.362 "timeout_us": 0, 00:35:20.362 "timeout_admin_us": 0, 00:35:20.362 "keep_alive_timeout_ms": 10000, 00:35:20.362 "arbitration_burst": 0, 00:35:20.362 "low_priority_weight": 0, 00:35:20.362 "medium_priority_weight": 0, 00:35:20.362 "high_priority_weight": 0, 00:35:20.362 "nvme_adminq_poll_period_us": 10000, 00:35:20.362 "nvme_ioq_poll_period_us": 0, 00:35:20.362 "io_queue_requests": 512, 00:35:20.362 "delay_cmd_submit": true, 00:35:20.362 "transport_retry_count": 4, 00:35:20.362 "bdev_retry_count": 3, 00:35:20.362 "transport_ack_timeout": 0, 00:35:20.362 "ctrlr_loss_timeout_sec": 0, 00:35:20.362 "reconnect_delay_sec": 0, 00:35:20.362 "fast_io_fail_timeout_sec": 0, 00:35:20.362 "disable_auto_failback": false, 00:35:20.362 "generate_uuids": false, 00:35:20.362 "transport_tos": 0, 00:35:20.362 "nvme_error_stat": false, 00:35:20.362 "rdma_srq_size": 0, 00:35:20.362 "io_path_stat": false, 00:35:20.362 "allow_accel_sequence": false, 00:35:20.362 "rdma_max_cq_size": 0, 00:35:20.362 "rdma_cm_event_timeout_ms": 0, 00:35:20.362 "dhchap_digests": [ 00:35:20.362 "sha256", 00:35:20.362 "sha384", 00:35:20.362 "sha512" 00:35:20.362 ], 00:35:20.362 "dhchap_dhgroups": [ 00:35:20.362 "null", 00:35:20.362 "ffdhe2048", 00:35:20.362 "ffdhe3072", 00:35:20.362 "ffdhe4096", 00:35:20.362 "ffdhe6144", 00:35:20.362 "ffdhe8192" 00:35:20.362 ] 00:35:20.362 } 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "method": "bdev_nvme_attach_controller", 00:35:20.362 "params": { 00:35:20.362 "name": "nvme0", 00:35:20.362 "trtype": "TCP", 00:35:20.362 "adrfam": "IPv4", 00:35:20.362 "traddr": "127.0.0.1", 00:35:20.362 "trsvcid": "4420", 00:35:20.362 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:20.362 "prchk_reftag": false, 00:35:20.362 "prchk_guard": false, 00:35:20.362 "ctrlr_loss_timeout_sec": 0, 00:35:20.362 "reconnect_delay_sec": 0, 00:35:20.362 "fast_io_fail_timeout_sec": 0, 00:35:20.362 "psk": "key0", 00:35:20.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.362 "hdgst": false, 00:35:20.362 "ddgst": false, 00:35:20.362 "multipath": "multipath" 00:35:20.362 } 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "method": "bdev_nvme_set_hotplug", 00:35:20.362 "params": { 00:35:20.362 "period_us": 100000, 00:35:20.362 "enable": false 00:35:20.362 } 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "method": "bdev_wait_for_examine" 00:35:20.362 } 00:35:20.362 ] 00:35:20.362 }, 00:35:20.362 { 00:35:20.362 "subsystem": "nbd", 00:35:20.362 "config": [] 00:35:20.362 } 00:35:20.362 ] 00:35:20.362 }' 00:35:20.362 16:28:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.362 16:28:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.362 [2024-11-20 16:28:21.091986] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:35:20.362 [2024-11-20 16:28:21.092035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017068 ] 00:35:20.362 [2024-11-20 16:28:21.164961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.620 [2024-11-20 16:28:21.206327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.620 [2024-11-20 16:28:21.368805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:21.187 16:28:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.187 16:28:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:21.187 16:28:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:21.187 16:28:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:21.187 16:28:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.446 16:28:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:21.446 16:28:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:21.446 16:28:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:21.446 16:28:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.446 16:28:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.446 16:28:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.446 16:28:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.704 16:28:22 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:21.704 16:28:22 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:21.704 16:28:22 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:21.704 16:28:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.704 16:28:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:21.704 16:28:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.704 16:28:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.704 16:28:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:21.704 16:28:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:21.704 16:28:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:21.704 16:28:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:21.963 16:28:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:21.963 16:28:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:21.963 16:28:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.JP3A4Mb0Bc /tmp/tmp.2lNdP7gwFE 00:35:21.963 16:28:22 keyring_file -- keyring/file.sh@20 -- # killprocess 3017068 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3017068 ']' 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3017068 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017068 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3017068' 00:35:21.963 killing process with pid 3017068 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@973 -- # kill 3017068 00:35:21.963 Received shutdown signal, test time was about 1.000000 seconds 00:35:21.963 00:35:21.963 Latency(us) 00:35:21.963 [2024-11-20T15:28:22.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.963 [2024-11-20T15:28:22.800Z] =================================================================================================================== 00:35:21.963 [2024-11-20T15:28:22.800Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:21.963 16:28:22 keyring_file -- common/autotest_common.sh@978 -- # wait 3017068 00:35:22.222 16:28:22 keyring_file -- keyring/file.sh@21 -- # killprocess 3015545 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3015545 ']' 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3015545 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015545 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015545' 00:35:22.222 killing process with pid 3015545 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@973 -- # kill 3015545 00:35:22.222 16:28:22 keyring_file -- common/autotest_common.sh@978 -- # wait 3015545 00:35:22.482 00:35:22.482 real 0m11.768s 00:35:22.482 user 0m29.245s 00:35:22.482 sys 0m2.647s 00:35:22.482 16:28:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:22.482 16:28:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:22.482 ************************************ 00:35:22.482 END TEST keyring_file 00:35:22.482 ************************************ 00:35:22.742 16:28:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:22.742 16:28:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:22.742 16:28:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:22.742 16:28:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.742 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:35:22.742 ************************************ 00:35:22.742 START TEST keyring_linux 00:35:22.742 ************************************ 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:22.742 Joined session keyring: 74438113 00:35:22.742 * Looking for test storage... 
00:35:22.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.742 --rc genhtml_branch_coverage=1 00:35:22.742 --rc genhtml_function_coverage=1 00:35:22.742 --rc genhtml_legend=1 00:35:22.742 --rc geninfo_all_blocks=1 00:35:22.742 --rc geninfo_unexecuted_blocks=1 00:35:22.742 00:35:22.742 ' 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.742 --rc genhtml_branch_coverage=1 00:35:22.742 --rc genhtml_function_coverage=1 00:35:22.742 --rc genhtml_legend=1 00:35:22.742 --rc geninfo_all_blocks=1 00:35:22.742 --rc geninfo_unexecuted_blocks=1 00:35:22.742 00:35:22.742 ' 
00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.742 --rc genhtml_branch_coverage=1 00:35:22.742 --rc genhtml_function_coverage=1 00:35:22.742 --rc genhtml_legend=1 00:35:22.742 --rc geninfo_all_blocks=1 00:35:22.742 --rc geninfo_unexecuted_blocks=1 00:35:22.742 00:35:22.742 ' 00:35:22.742 16:28:23 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.742 --rc genhtml_branch_coverage=1 00:35:22.742 --rc genhtml_function_coverage=1 00:35:22.742 --rc genhtml_legend=1 00:35:22.742 --rc geninfo_all_blocks=1 00:35:22.742 --rc geninfo_unexecuted_blocks=1 00:35:22.742 00:35:22.742 ' 00:35:22.742 16:28:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:22.742 16:28:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:22.742 16:28:23 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.742 16:28:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.742 16:28:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.742 16:28:23 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.743 16:28:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.743 16:28:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:22.743 16:28:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.743 16:28:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:22.743 16:28:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:22.743 16:28:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:23.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:23.003 /tmp/:spdk-test:key0 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:23.003 16:28:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:23.003 16:28:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:23.003 /tmp/:spdk-test:key1 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3017620 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3017620 00:35:23.003 16:28:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:23.003 16:28:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3017620 ']' 00:35:23.003 16:28:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.003 16:28:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.003 16:28:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.003 16:28:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.003 16:28:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:23.003 [2024-11-20 16:28:23.722661] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
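The `prep_key`/`format_interchange_psk` step above builds the `NVMeTLSkey-1:00:...:` strings through an inline Python snippet before they are written to `/tmp/:spdk-test:key0` and loaded with `keyctl add`. A minimal sketch of that encoding, assuming (as it is not shown in the log) that the helper appends a CRC-32 of the key as 4 little-endian bytes before Base64-encoding, with hash indicator `00` for an unhashed PSK:

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int = 0) -> str:
    # TLS PSK interchange format: "NVMeTLSkey-1:<hh>:<base64>:" where the
    # Base64 payload is the configured key followed by its CRC-32, appended
    # as 4 little-endian bytes (assumption: matches SPDK's inline helper).
    data = key.encode()
    data += zlib.crc32(data).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02}:{}:".format(hash_id, base64.b64encode(data).decode())

# key0 from the test above; digest 0 means the PSK is used unhashed.
psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

The CRC lets a receiver detect a corrupted interchange string before deriving session keys from it.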
00:35:23.003 [2024-11-20 16:28:23.722712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017620 ] 00:35:23.003 [2024-11-20 16:28:23.797842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.262 [2024-11-20 16:28:23.838198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:23.830 16:28:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:23.830 [2024-11-20 16:28:24.560418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.830 null0 00:35:23.830 [2024-11-20 16:28:24.592465] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:23.830 [2024-11-20 16:28:24.592835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.830 16:28:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:23.830 216995353 00:35:23.830 16:28:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:23.830 283955819 00:35:23.830 16:28:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3017822 00:35:23.830 16:28:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3017822 /var/tmp/bperf.sock 00:35:23.830 16:28:24 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3017822 ']' 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.830 16:28:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:24.089 [2024-11-20 16:28:24.667607] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:35:24.089 [2024-11-20 16:28:24.667651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017822 ] 00:35:24.089 [2024-11-20 16:28:24.744098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.089 [2024-11-20 16:28:24.786994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.089 16:28:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.089 16:28:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:24.089 16:28:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:24.089 16:28:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:24.348 16:28:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:24.348 16:28:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:24.606 16:28:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:24.606 16:28:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:24.606 [2024-11-20 16:28:25.432571] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:24.866 nvme0n1 00:35:24.866 16:28:25 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:24.866 16:28:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:24.866 16:28:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:24.866 16:28:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:24.866 16:28:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:24.866 16:28:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:25.126 16:28:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:25.126 16:28:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:25.126 16:28:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@25 -- # sn=216995353 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@26 -- # [[ 216995353 == \2\1\6\9\9\5\3\5\3 ]] 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 216995353 00:35:25.126 16:28:25 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:25.126 16:28:25 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:25.385 Running I/O for 1 seconds... 00:35:26.322 20763.00 IOPS, 81.11 MiB/s 00:35:26.322 Latency(us) 00:35:26.322 [2024-11-20T15:28:27.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.322 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:26.322 nvme0n1 : 1.01 20763.62 81.11 0.00 0.00 6143.81 4274.09 9630.94 00:35:26.322 [2024-11-20T15:28:27.159Z] =================================================================================================================== 00:35:26.322 [2024-11-20T15:28:27.159Z] Total : 20763.62 81.11 0.00 0.00 6143.81 4274.09 9630.94 00:35:26.322 { 00:35:26.322 "results": [ 00:35:26.322 { 00:35:26.322 "job": "nvme0n1", 00:35:26.322 "core_mask": "0x2", 00:35:26.322 "workload": "randread", 00:35:26.322 "status": "finished", 00:35:26.322 "queue_depth": 128, 00:35:26.322 "io_size": 4096, 00:35:26.322 "runtime": 1.006135, 00:35:26.322 "iops": 20763.615220621487, 00:35:26.322 "mibps": 81.10787195555268, 00:35:26.322 "io_failed": 0, 00:35:26.322 "io_timeout": 0, 00:35:26.322 "avg_latency_us": 6143.810583879474, 00:35:26.322 "min_latency_us": 4274.086956521739, 00:35:26.322 "max_latency_us": 9630.942608695652 00:35:26.322 } 00:35:26.322 ], 00:35:26.322 "core_count": 1 00:35:26.322 } 00:35:26.322 16:28:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:26.322 16:28:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:26.581 16:28:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:26.581 16:28:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:26.581 16:28:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:26.581 16:28:27 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:26.581 16:28:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:26.581 16:28:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:26.839 16:28:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:26.839 16:28:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:26.839 16:28:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:26.839 16:28:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.839 16:28:27 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:26.839 16:28:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:26.839 [2024-11-20 16:28:27.611670] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:26.839 [2024-11-20 16:28:27.612631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e46f60 (107): Transport endpoint is not connected 00:35:26.839 [2024-11-20 16:28:27.613626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e46f60 (9): Bad file descriptor 00:35:26.840 [2024-11-20 16:28:27.614627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:26.840 [2024-11-20 16:28:27.614639] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:26.840 [2024-11-20 16:28:27.614646] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:26.840 [2024-11-20 16:28:27.614655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:26.840 request: 00:35:26.840 { 00:35:26.840 "name": "nvme0", 00:35:26.840 "trtype": "tcp", 00:35:26.840 "traddr": "127.0.0.1", 00:35:26.840 "adrfam": "ipv4", 00:35:26.840 "trsvcid": "4420", 00:35:26.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.840 "prchk_reftag": false, 00:35:26.840 "prchk_guard": false, 00:35:26.840 "hdgst": false, 00:35:26.840 "ddgst": false, 00:35:26.840 "psk": ":spdk-test:key1", 00:35:26.840 "allow_unrecognized_csi": false, 00:35:26.840 "method": "bdev_nvme_attach_controller", 00:35:26.840 "req_id": 1 00:35:26.840 } 00:35:26.840 Got JSON-RPC error response 00:35:26.840 response: 00:35:26.840 { 00:35:26.840 "code": -5, 00:35:26.840 "message": "Input/output error" 00:35:26.840 } 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@33 -- # sn=216995353 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 216995353 00:35:26.840 1 links removed 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:26.840 
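The deliberately failing attach above comes back as a JSON-RPC error object (`code: -5`, i.e. `-EIO`), which the `NOT`/`valid_exec_arg` wrapper converts into `es=1`. A small sketch of classifying such a response (the error body is taken verbatim from the log; the helper itself is hypothetical, not an SPDK API):

```python
import json

def classify(response_body: str):
    """Return (ok, payload): (False, (code, message)) for a JSON-RPC
    error object, (True, result) otherwise."""
    body = json.loads(response_body)
    if "code" in body:  # error object, as printed in the log above
        return False, (body["code"], body["message"])
    return True, body

# Error object copied from the failed bdev_nvme_attach_controller call.
ok, err = classify('{"code": -5, "message": "Input/output error"}')
print(ok, err)  # → False (-5, 'Input/output error')
```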
16:28:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@33 -- # sn=283955819 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 283955819 00:35:26.840 1 links removed 00:35:26.840 16:28:27 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3017822 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3017822 ']' 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3017822 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.840 16:28:27 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017822 00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3017822' 00:35:27.099 killing process with pid 3017822 00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@973 -- # kill 3017822 00:35:27.099 Received shutdown signal, test time was about 1.000000 seconds 00:35:27.099 00:35:27.099 Latency(us) 00:35:27.099 [2024-11-20T15:28:27.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.099 [2024-11-20T15:28:27.936Z] =================================================================================================================== 00:35:27.099 [2024-11-20T15:28:27.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@978 -- # wait 3017822 
00:35:27.099 16:28:27 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3017620
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3017620 ']'
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3017620
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017620
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3017620'
00:35:27.099 killing process with pid 3017620
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@973 -- # kill 3017620
00:35:27.099 16:28:27 keyring_linux -- common/autotest_common.sh@978 -- # wait 3017620
00:35:27.665
00:35:27.665 real 0m4.842s
00:35:27.665 user 0m8.806s
00:35:27.665 sys 0m1.451s
00:35:27.665 16:28:28 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:27.665 16:28:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:35:27.665 ************************************
00:35:27.665 END TEST keyring_linux
00:35:27.665 ************************************
00:35:27.665 16:28:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:27.665 16:28:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:27.665 16:28:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:27.665 16:28:28 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:27.665 16:28:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:27.665 16:28:28 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:27.665 16:28:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:27.665 16:28:28 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:27.665 16:28:28 -- common/autotest_common.sh@10 -- # set +x
00:35:27.665 16:28:28 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:27.665 16:28:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:27.665 16:28:28 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:27.665 16:28:28 -- common/autotest_common.sh@10 -- # set +x
00:35:32.960 INFO: APP EXITING
00:35:32.960 INFO: killing all VMs
00:35:32.960 INFO: killing vhost app
00:35:32.960 INFO: EXIT DONE
00:35:35.498 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:35.498 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:35.498 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:35:38.793 Cleaning
00:35:38.793 Removing: /var/run/dpdk/spdk0/config
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:38.793 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:38.793 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:38.793 Removing: /var/run/dpdk/spdk1/config
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:38.793 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:38.793 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:38.793 Removing: /var/run/dpdk/spdk2/config
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:38.793 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:38.793 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:38.793 Removing: /var/run/dpdk/spdk3/config
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:38.793 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:38.793 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:38.793 Removing: /var/run/dpdk/spdk4/config
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:38.793 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:38.793 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:38.793 Removing: /dev/shm/bdev_svc_trace.1
00:35:38.793 Removing: /dev/shm/nvmf_trace.0
00:35:38.793 Removing: /dev/shm/spdk_tgt_trace.pid2537553
00:35:38.793 Removing: /var/run/dpdk/spdk0
00:35:38.793 Removing: /var/run/dpdk/spdk1
00:35:38.793 Removing: /var/run/dpdk/spdk2
00:35:38.793 Removing: /var/run/dpdk/spdk3
00:35:38.793 Removing: /var/run/dpdk/spdk4
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2535400
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2536468
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2537553
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2538188
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2539124
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2539157
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2540128
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2540356
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2540569
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2542235
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2543645
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2544072
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2544644
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2544934
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2545225
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2545478
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2545724
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2546015
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2546754
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2549758
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2550014
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2550268
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2550285
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2550772
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2550782
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2551272
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2551283
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2551544
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2551707
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2551816
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2552034
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2552404
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2552628
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2552940
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2556809
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2561114
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2571150
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2571842
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2576260
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2576592
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2580857
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2586750
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2590092
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2600318
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2609264
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2611074
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2611997
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2628896
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2632967
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2679853
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2685249
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2691547
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2698247
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2698249
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2699145
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2699881
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2700784
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2701443
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2701488
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2701719
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2701732
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2701839
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2702648
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2703568
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2704482
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2704953
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2705112
00:35:38.793 Removing: /var/run/dpdk/spdk_pid2705400
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2706424
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2707399
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2715588
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2744927
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2749460
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2751159
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2752862
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2753018
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2753248
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2753266
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2753770
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2755604
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2756402
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2756873
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2759022
00:35:38.794 Removing: /var/run/dpdk/spdk_pid2759603
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2760324
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2764893
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2770380
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2770381
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2770382
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2774166
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2782499
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2786302
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2792306
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2793603
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2795019
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2796329
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2800944
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2805278
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2809287
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2817212
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2817321
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2821913
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2822140
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2822374
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2822831
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2822838
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2827321
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2827870
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2832225
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2834760
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2840157
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2845499
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2854285
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2861787
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2861799
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2880586
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2881060
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2881750
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2882226
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2882962
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2883443
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2884075
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2884613
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2888671
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2889042
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2894953
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2895213
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2900469
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2904700
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2915118
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2915632
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2919888
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2920138
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2924404
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2930036
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2932626
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2942664
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2951462
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2953139
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2954104
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2970403
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2974425
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2977110
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2984849
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2984854
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2989900
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2991857
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2993818
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2994998
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2997016
00:35:39.053 Removing: /var/run/dpdk/spdk_pid2998255
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3007383
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3007845
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3008521
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3010786
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3011251
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3011720
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3015545
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3015549
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3017068
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3017620
00:35:39.312 Removing: /var/run/dpdk/spdk_pid3017822
00:35:39.312 Clean
00:35:39.312 16:28:40 -- common/autotest_common.sh@1453 -- # return 0
00:35:39.312 16:28:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:39.312 16:28:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:39.312 16:28:40 -- common/autotest_common.sh@10 -- # set +x
00:35:39.312 16:28:40 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:39.312 16:28:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:39.312 16:28:40 -- common/autotest_common.sh@10 -- # set +x
00:35:39.312 16:28:40 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:39.312 16:28:40 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:39.312 16:28:40 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:39.312 16:28:40 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:39.312 16:28:40 -- spdk/autotest.sh@398 -- # hostname
00:35:39.312 16:28:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:39.571 geninfo: WARNING: invalid characters removed from testname!
00:36:01.509 16:29:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:03.413 16:29:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.318 16:29:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:07.223 16:29:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:09.309 16:29:09 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:11.215 16:29:11 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:13.121 16:29:13 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:13.121 16:29:13 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:13.121 16:29:13 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:13.121 16:29:13 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:13.121 16:29:13 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:13.121 16:29:13 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:13.121 + [[ -n 2458439 ]]
00:36:13.121 + sudo kill 2458439
00:36:13.130 [Pipeline] }
00:36:13.145 [Pipeline] // stage
00:36:13.151 [Pipeline] }
00:36:13.166 [Pipeline] // timeout
00:36:13.173 [Pipeline] }
00:36:13.187 [Pipeline] // catchError
00:36:13.192 [Pipeline] }
00:36:13.206 [Pipeline] // wrap
00:36:13.212 [Pipeline] }
00:36:13.225 [Pipeline] // catchError
00:36:13.234 [Pipeline] stage
00:36:13.236 [Pipeline] { (Epilogue)
00:36:13.250 [Pipeline] catchError
00:36:13.252 [Pipeline] {
00:36:13.265 [Pipeline] echo
00:36:13.267 Cleanup processes
00:36:13.273 [Pipeline] sh
00:36:13.558 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:13.558 3028305 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:13.573 [Pipeline] sh
00:36:13.858 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:13.858 ++ grep -v 'sudo pgrep'
00:36:13.858 ++ awk '{print $1}'
00:36:13.858 + sudo kill -9
00:36:13.858 + true
00:36:13.871 [Pipeline] sh
00:36:14.158 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:26.380 [Pipeline] sh
00:36:26.665 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:26.665 Artifacts sizes are good
00:36:26.680 [Pipeline] archiveArtifacts
00:36:26.687 Archiving artifacts
00:36:26.815 [Pipeline] sh
00:36:27.107 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:27.123 [Pipeline] cleanWs
00:36:27.133 [WS-CLEANUP] Deleting project workspace...
00:36:27.133 [WS-CLEANUP] Deferred wipeout is used...
00:36:27.140 [WS-CLEANUP] done
00:36:27.142 [Pipeline] }
00:36:27.159 [Pipeline] // catchError
00:36:27.171 [Pipeline] sh
00:36:27.453 + logger -p user.info -t JENKINS-CI
00:36:27.463 [Pipeline] }
00:36:27.477 [Pipeline] // stage
00:36:27.482 [Pipeline] }
00:36:27.495 [Pipeline] // node
00:36:27.500 [Pipeline] End of Pipeline
00:36:27.538 Finished: SUCCESS